| Unplanned Outage | e3 Biometrics | 11/6/18 00:00 | 11/6/18 16:45 | 16:45 | N/A | | Upon investigating the backlog of NGI transactions, e3 Support noticed 45 to 50 transactions stuck processing. E3 Support was able to determine that the transactions did not receive an external identifier. E3 Support engaged the OBIM PAS team and initiated a bridge call with Matcher Support and e3 Developers to investigate. | | OBP;#OFO | | | 11/6/18 07:40 | e3 support | 11715612 | Yes | N/A | November 6, 2018 7:40 AM: E3 Support noticed the backlog
November 6, 2018 7:55 AM: Transactions have been sent to OBIM
November 6, 2018 10:34 AM: E3 Support is investigating the backlog. This morning when e3 Support arrived, the team noticed 14 transactions in the backlog that had been stuck processing from 12:00 AM to 4:38 AM, and sent them over to OBIM to investigate. Following that, 6 transactions came in between 5:51 AM and 8:52 AM and did not clear from the backlog. At 10:05 AM, 17 transactions came in within a minute of each other and are still processing. E3 Support will provide more updates shortly as we gather more details. At this point in time e3 Support has not received any emails or calls from the field.
November 6, 2018 10:59 AM: There appears to be an issue with the External IDs not being sent or received between OBIM and e3. E3 Support has contacted OBIM via email and phone, and spoke with Edie on the OBIM PAS team, who is investigating whether the external IDs were received. E3 Support has also requested that e3 Development and the SCM team investigate any failures within the IXM servers for the time frames of 12:00 AM – 4:30 AM, 5:51 AM – 9:00 AM, and 10:00 – 10:30 AM.
November 6, 2018 12:06 PM: From the list below, OBIM has identified the transactions marked in red as having come through with bad fingerprints and not generating an external ID. Per OBIM, they will have to examine them one by one and process them manually. E3 Support is still investigating the remaining transactions in the backlog. Currently there is a total of 45 transactions stuck processing, including the 16 below.
November 6, 2018 12:56 PM: OBIM identified the transactions below and advised that a response was sent back and an external ID was generated. E3 Support has noticed that these transactions are still appearing on our reports as not having generated an external ID. A bridge call has been established with the OBIM PAS team, Matcher Support, and e3 Developers to investigate. As of now, transactions are processing in real time and the backlog has remained around 45 to 50 transactions.
November 6, 2018 1:10 PM: Situational awareness was started. Incident Description and Impact Statement: Upon investigating the backlog of NGI transactions, e3 Support noticed 45 to 50 transactions stuck processing. E3 Support was able to determine that the transactions did not receive an external identifier. E3 Support engaged the OBIM PAS team and initiated a bridge call with Matcher Support and e3 Developers to investigate.
November 6, 2018 2:00 PM: The bridge call continues with the OBIM PAS team, Matcher Support, and e3 Developers. The backlog report of transactions missing external identifiers is no longer being generated. Currently all transactions have initiated the external ID. The latest backlog of 49 transactions has been sent over to OBIM to determine why they are pending a response. Transactions continue to process in real time, although response times may be delayed.
November 6, 2018 2:28 PM: The bridge call has ended and the transactions have been sent to OBIM for investigation. The OBIM PAS team believes that the issue is possibly on the Matcher side. Transactions are trending down and currently at 35. The bridge call will reconvene at 3:00 PM with the OBIM PAS team, Matcher Support, and e3 Developers.
November 6, 2018 4:49 PM: The OBIM Matcher team experienced an issue three weeks ago when applying a SUSE patch to their Linux servers. The patching caused a JRE exception on their servers, breaking image retrieval for 1:1 and 10-print matching. The issue continues to persist on the servers, with the OBIM Matcher team's workaround being to bounce the servers whenever they run into the JRE exception. This issue recurred last night, causing the backlog of e3 transactions. The OBIM Matcher team bounced the servers and is back to being fully operational. The permanent fix is to apply an updated version of the SUSE patch to the servers. The OBIM Matcher team expects the new patching to continue through the end of the year. E3 has requested a schedule breakdown of which servers will have the patch applied and when, and is waiting on this schedule. As of now, the backlog reflects 12 transactions, with only 4 from today's issue. | | | OBIM Matchers Impacting NGI Transactions | | OBIM/IDENT | The OBIM Matcher team expects the new patching to continue through the end of the year. E3 has requested a schedule breakdown of which servers will have the patch applied and when. | The OBIM Matcher team experienced an issue three weeks ago when applying a SUSE patch to their Linux servers. The patching caused a JRE exception on their servers, breaking image retrieval for 1:1 and 10-print matching. The issue continues to persist on the servers, with the OBIM Matcher team's workaround being to bounce the servers whenever they run into the JRE exception. | OBIM | OBIM PAS team, OBIM Matcher Support, e3 Developers | Users will be unable to access the criminal histories of subjects. Users sometimes require this data before they can determine how they need to process a subject. *Officer Safety* - without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | OBIM | N/A | Agents were able to submit transactions, but transactions were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Biometrics | 10/18/18 20:50 | 10/19/18 00:00 | 3:10 | N/A | | E3 Support called OBIM last night at 1:23 AM and they confirmed there was no issue on their end. E3 Support called the CJIS Watch Commander at 1:39 AM and CJIS also confirmed there was no issue on their end. Transactions were processing in real time. On Friday, October 19, 2018 at 2:20 AM, e3 Support followed up with an email to OBIM to investigate the backlog of e3 transactions. The backlog started building at 8:53 PM Thursday until 12:03 AM Friday, and e3 Support requested that this be considered high priority. E3 Support also requested OBIM do a spot check on some of the TIDs to confirm whether the external ID was received. E3 Support received an email from the TOC Thursday, October 18, 2018 at 10:41 PM in reference to IXM services alerting in AppDynamics; the actual alert in the email shows a time of 10:33 PM. OBIM did confirm they received the external ID for the TIDs in question. After further investigation, e3 discovered that there was Linux patching going on Thursday night 10/18/18 which affected 4 of the 6 IXM VMs (2 were offline). With two VMs offline there were not enough database connections available (there were 80 fewer connections); with the new photo services online, all connections were needed. An IAFIS backlog started to build & overwhelmed the available resources until the patching was over. More connections were allocated to the IXM servers while e3 worked with OBIM to bring down the backlog. OBIM was able to resubmit the transactions that were impacted during the patching, and at 1:44 PM on 10/19 the backlog of transactions from the 10/18 incident (between 9:00 AM Thursday and 12:00 AM Friday morning) was resubmitted, fully processed, and cleared from the e3 backlog report. A new backlog started growing at 12:45 PM, with approx. 43 transactions still processing. E3 Support reached out to OBIM as well as CJIS to investigate, although the transactions were still within the SLA of 2 hours. By 2:40 PM OBIM noted that they were receiving delayed responses from CJIS and sent out notification. OBIM spun up a bridge call; within minutes of joining the call, e3 Support received notification from OBIM that CJIS was processing in real time (3:00 PM). | | OBP;#OFO;#OFO/SIGMA | | | 10/23/18 01:00 | CBP TOC | N/A | Yes | N/A | | | | LINUX Patching Impacting IXM Servers and Biometrics Submissions | LINUX | E3 | Increased the Data Source Maximum Connection pool from 40 to 60 | LINUX patching impacting IXM servers and Biometrics submissions
| CBP LINUX team | OBIM, NGI/CJIS, SCM | All Agents/Officers in the field were unable to retrieve responses for booking transactions between 8:50 PM Thursday and 12:03 AM Friday, causing a significant rise in subjects in custody during that time. | N/A | LINUX | Increased the Data Source Maximum Connection pool from 40 to 60 (a hedged sketch of this type of pool change follows this record) | YES | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
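The mitigation described in the record above was to raise the IXM data source's maximum connection count from 40 to 60 so that the remaining VMs could cover the connections lost during patching. The actual application-server configuration is not captured in this log; the following is only a minimal sketch of the same idea using python-oracledb's connection pool, with a hypothetical account, password, and connect string standing in for the real values.

import oracledb  # python-oracledb driver

# Minimal sketch, not the actual IXM data source configuration: raise the pool's
# maximum connections from 40 to 60. The credentials and DSN are hypothetical.
pool = oracledb.create_pool(
    user="ixm_app",           # hypothetical account
    password="change_me",     # hypothetical password
    dsn="db-host/EIDSVC",     # hypothetical connect string
    min=10,
    max=60,                   # previously 40; raised to absorb the lost VM capacity
    increment=5,
)

# Quick sanity check that a connection can be acquired from the pool.
with pool.acquire() as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM dual")
        print(cur.fetchone())

pool.close()

In an application server the equivalent change would normally be made in the data source's connection pool settings rather than in code; the point of the sketch is simply the larger ceiling (60) relative to the old one (40).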
| Significant Issue | e3 FPQ | 10/18/18 20:50 | 10/19/18 00:00 | 3:10 | N/A | | E3 Support called OBIM last night at 1:23 AM and they confirmed there was no issue on their end. E3 Support called the CJIS Watch Commander at 1:39 AM and CJIS also confirmed there was no issue on their end. Transactions were processing in real time. On Friday, October 19, 2018 at 2:20 AM, e3 Support followed up with an email to OBIM to investigate the backlog of e3 transactions. The backlog started building at 8:53 PM Thursday until 12:03 AM Friday, and e3 Support requested that this be considered high priority. E3 Support also requested OBIM do a spot check on some of the TIDs to confirm whether the external ID was received. E3 Support received an email from the TOC Thursday, October 18, 2018 at 10:41 PM in reference to IXM services alerting in AppDynamics; the actual alert in the email shows a time of 10:33 PM. OBIM did confirm they received the external ID for the TIDs in question. After further investigation, e3 discovered that there was Linux patching going on Thursday night 10/18/18 which affected 4 of the 6 IXM VMs (2 were offline). With two VMs offline there were not enough database connections available (there were 80 fewer connections); with the new photo services online, all connections were needed. An IAFIS backlog started to build & overwhelmed the available resources until the patching was over. More connections were allocated to the IXM servers while e3 worked with OBIM to bring down the backlog. OBIM was able to resubmit the transactions that were impacted during the patching, and at 1:44 PM on 10/19 the backlog of transactions from the 10/18 incident (between 9:00 AM Thursday and 12:00 AM Friday morning) was resubmitted, fully processed, and cleared from the e3 backlog report. A new backlog started growing at 12:45 PM, with approx. 43 transactions still processing. E3 Support reached out to OBIM as well as CJIS to investigate, although the transactions were still within the SLA of 2 hours. By 2:40 PM OBIM noted that they were receiving delayed responses from CJIS and sent out notification. OBIM spun up a bridge call; within minutes of joining the call, e3 Support received notification from OBIM that CJIS was processing in real time (3:00 PM). | | OBP;#OFO;#OFO/SIGMA | | | 10/23/18 01:00 | CBP TOC | N/A | Yes | N/A | | | | LINUX Patching Impacting IXM Servers and Biometrics Submissions | LINUX | E3 | Increased the Data Source Maximum Connection pool from 40 to 60 | LINUX patching impacting IXM servers and Biometrics submissions
| CBP LINUX team | OBIM, NGI/CJIS, SCM | All Agents/Officers in the field were unable to retrieve responses for booking transactions between 8:50 PM Thursday and 12:03 AM Friday, causing a significant rise in subjects in custody during that time. | N/A | LINUX | Increased the Data Source Maximum Connection pool from 40 to 60 | YES | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 10/16/18 00:45 | 10/16/18 03:05 | 2:20 | N/A | | Incident Description and Impact Statement: The CBP Technology Service Desk received a call from the field reporting SQL errors when querying cases. Duty Officers established a bridge call. E3 Support is currently investigating the impact to users and will provide updates as they become available.
| | OBP;#OFO;#OFO/SIGMA | | | 10/16/18 01:00 | CBP TSD | 11617925 | Yes | N/A | 1:00 AM – CBP TSD contacted e3 Support
1:25 AM – Duty Officer established bridge call
1:28 AM – e3 Support members Nielab and Ikram joined the bridge call
2:05 AM – Shalini joined the call
2:19 AM – Situational awareness was sent out
2:25 AM – Jose Villafane made positive contact with Stephanie Trieno from ICE EID team
2:50 AM – Nadine ICE DBA joined the call
2:57 AM – EWS team joined the call
3:03 AM – Nadine added 10G space to the SYSAUX table
3:07 AM - E3 Support verified with the field they were no longer receiving the SQL Errors.
3:12 AM – Bridge call ended
| | | ICE EID SQL Error Responses | | ICE/EID | Resolution: While the issue started after EDME LAN CR89040 was completed, the issue was determined to be on the EID side. E3 Support made positive contact with ICE DBA Stephanie, who reached out to ICE DBA Nadine Azie. At 3:03 AM DBA Nadine allocated 10 GB of additional space to the SYSAUX tablespace, with room to autoextend up to 100 GB (1 GB at a time). At 3:07 AM e3 Support verified with the field that they were no longer receiving the SQL errors. (A hedged sketch of this type of tablespace change follows this record.) | There was not enough free space in the SYSAUX tablespace, which caused the SQL error to display in the e3 modules.
Date:10/16/2018 0111 Error Code: Transaction ID:Unknown
Short Message:SQL error: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000700789C00003$$ by 64 in tablespace SYSAUX
See Detailed Message | CBP, ICE | EDME LAN, EDME EWS, e3 Support, ICE DBA | All Agents/Officers in the field were unable to access all of the e3 application, causing a significant rise in the wait time of subjects in custody. | N/A | ICE DBA | N/A | Inaccessible | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
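The resolution above describes the ICE DBA adding 10 GB to the SYSAUX tablespace, with room to autoextend to 100 GB in 1 GB increments, to clear the ORA-01691 error. The datafile path, instance name, and credentials are not in this record, so the following is only a hedged sketch of what that type of fix can look like, issued here through python-oracledb as SYSDBA with placeholder values.

import oracledb  # python-oracledb driver

# Hedged sketch of the type of fix described above for ORA-01691 on SYSAUX.
# The DSN, credentials, and datafile path are hypothetical, not from the record.
conn = oracledb.connect(
    user="sys",
    password="change_me",          # hypothetical
    dsn="eid-db-host/EIDSVC",      # hypothetical
    mode=oracledb.AUTH_MODE_SYSDBA,
)
with conn.cursor() as cur:
    # Check current free space in SYSAUX before making changes.
    cur.execute(
        "SELECT ROUND(SUM(bytes)/1024/1024) FROM dba_free_space "
        "WHERE tablespace_name = 'SYSAUX'"
    )
    print("SYSAUX free (MB):", cur.fetchone()[0])

    # Add a 10 GB datafile that autoextends 1 GB at a time up to 100 GB,
    # mirroring the resolution in the record above.
    cur.execute(
        "ALTER TABLESPACE sysaux "
        "ADD DATAFILE '/u01/oradata/EID/sysaux02.dbf' "  # hypothetical path
        "SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 100G"
    )
conn.close()

An alternative would be enabling autoextend on an existing SYSAUX datafile; the record does not say which approach the DBA used, only the sizes involved.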
| Unplanned Outage | e3 FPQ | 10/16/18 00:45 | 10/16/18 03:05 | 2:20 | N/A | | Incident Description and Impact Statement: The CBP Technology Service Desk received a call from the field reporting SQL errors when querying cases. Duty Officers established a bridge call. E3 Support is currently investigating the impact to users and will provide updates as they become available.
| | OBP;#OFO;#OFO/SIGMA | | | 10/16/18 01:00 | CBP TSD | 11617925 | Yes | N/A | 1:00 AM – CBP TSD contacted e3 Support
1:25 AM – Duty Officer established bridge call
1:28 AM – e3 Support members Nielab and Ikram joined the bridge call
2:05 AM – Shalini joined the call
2:19 AM – Situational awareness was sent out
2:25 AM – Jose Villafane made positive contact with Stephanie Trieno from ICE EID team
2:50 AM – Nadine ICE DBA joined the call
2:57 AM – EWS team joined the call
3:03 AM – Nadine added 10G space to the SYSAUX table
3:07 AM - E3 Support verified with the field they were no longer receiving the SQL Errors.
3:12 AM – Bridge call ended
| | | ICE EID SQL Error Responses | | ICE/EID | Resolution: While the issue started after EDME LAN CR89040 was completed, the issue was determined to be on the EID side. E3 Support made positive contact with ICE DBA Stephanie, who reached out to ICE DBA Nadine Azie. At 3:03 AM DBA Nadine allocated 10 GB of additional space to the SYSAUX tablespace, with room to autoextend up to 100 GB (1 GB at a time). At 3:07 AM e3 Support verified with the field that they were no longer receiving the SQL errors. | There was not enough free space in the SYSAUX tablespace, which caused the SQL error to display in the e3 modules.
Date:10/16/2018 0111 Error Code: Transaction ID:Unknown
Short Message:SQL error: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000700789C00003$$ by 64 in tablespace SYSAUX
See Detailed Message | CBP, ICE | EDME LAN, EDME EWS, e3 Support, ICE DBA | All Agents/Officers in the field were unable to access all of the e3 application, causing a significant rise in the wait time of subjects in custody. | N/A | ICE DBA | N/A | Inaccessible | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Detentions | 10/16/18 00:45 | 10/16/18 03:05 | 2:20 | N/A | | Incident Description and Impact Statement: The CBP Technology Service Desk received a call from the field reporting SQL errors when querying cases. Duty Officers established a bridge call. E3 Support is currently investigating the impact to users and will provide updates as they become available.
| | OBP;#OFO;#OFO/SIGMA | | | 10/16/18 01:00 | CBP TSD | 11617925 | Yes | N/A | 1:00 AM – CBP TSD contacted e3 Support
1:25 AM – Duty Officer established bridge call
1:28 AM – e3 Support members Nielab and Ikram joined the bridge call
2:05 AM – Shalini joined the call
2:19 AM – Situational awareness was sent out
2:25 AM – Jose Villafane made positive contact with Stephanie Trieno from ICE EID team
2:50 AM – Nadine ICE DBA joined the call
2:57 AM – EWS team joined the call
3:03 AM – Nadine added 10G space to the SYSAUX table
3:07 AM - E3 Support verified with the field they were no longer receiving the SQL Errors.
3:12 AM – Bridge call ended
| | | ICE EID SQL Error Responses | | ICE/EID | Resolution: While the issue started after EDME LAN CR89040 was completed, the issue was determined to be on the EID side. E3 Support made positive contact with ICE DBA Stephanie, who reached out to ICE DBA Nadine Azie. At 3:03 AM DBA Nadine allocated 10 GB of additional space to the SYSAUX tablespace, with room to autoextend up to 100 GB (1 GB at a time). At 3:07 AM e3 Support verified with the field that they were no longer receiving the SQL errors. | There was not enough free space in the SYSAUX tablespace, which caused the SQL error to display in the e3 modules.
Date:10/16/2018 0111 Error Code: Transaction ID:Unknown
Short Message:SQL error: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000700789C00003$$ by 64 in tablespace SYSAUX
See Detailed Message | CBP, ICE | EDME LAN, EDME EWS, e3 Support, ICE DBA | All Agents/Officers in the field were unable to access all of the e3 application, causing a significant rise in the wait time of subjects in custody. | N/A | ICE DBA | N/A | Inaccessible | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 10/16/18 00:45 | 10/16/18 03:05 | 2:20 | N/A | | Incident Description and Impact Statement: The CBP Technology Service Desk received a call from the field reporting SQL errors when querying cases. Duty Officers established a bridge call. E3 Support is currently investigating the impact to users and will provide updates as they become available.
| | OBP;#OFO;#OFO/SIGMA | | | 10/16/18 01:00 | CBP TSD | 11617925 | Yes | N/A | 1:00 AM – CBP TSD contacted e3 Support
1:25 AM – Duty Officer established bridge call
1:28 AM – e3 Support members Nielab and Ikram joined the bridge call
2:05 AM – Shalini joined the call
2:19 AM – Situational awareness was sent out
2:25 AM – Jose Villafane made positive contact with Stephanie Trieno from ICE EID team
2:50 AM – Nadine ICE DBA joined the call
2:57 AM – EWS team joined the call
3:03 AM – Nadine added 10G space to the SYSAUX table
3:07 AM - E3 Support verified with the field they were no longer receiving the SQL Errors.
3:12 AM – Bridge call ended
| | | ICE EID SQL Error Responses | | ICE/EID | Resolution: While the issue started after EDME LAN CR89040 was completed, the issue was determined to be on the EID side. E3 Support made positive contact with ICE DBA Stephanie, who reached out to ICE DBA Nadine Azie. At 3:03 AM DBA Nadine allocated 10 GB of additional space to the SYSAUX tablespace, with room to autoextend up to 100 GB (1 GB at a time). At 3:07 AM e3 Support verified with the field that they were no longer receiving the SQL errors. | There was not enough free space in the SYSAUX tablespace, which caused the SQL error to display in the e3 modules.
Date:10/16/2018 0111 Error Code: Transaction ID:Unknown
Short Message:SQL error: ORA-01691: unable to extend lob segment SYS.SYS_LOB0000700789C00003$$ by 64 in tablespace SYSAUX
See Detailed Message | CBP, ICE | EDME LAN, EDME EWS, e3 Support, ICE DBA | All Agents/Officers in the field were unable to access all of the e3 application, causing a significant rise in the wait time of subjects in custody. | N/A | ICE DBA | N/A | Inaccessible | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 10/15/18 08:30 | 10/15/18 10:40 | 2:10 | N/A | | Incident Description and Impact Statement: DataPower Support has observed that the DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support has contacted the ABIS Watch Desk, and they have acknowledged that their sFTP server is down. They are working on the issue and have advised their systems will be up within the next 30 minutes. The current ABIS backlog report reflects a backlog of approx. 86 transactions starting from 8:30 AM this morning. | | OBP;#OFO;#OFO/SIGMA | | | 10/15/18 10:30 | DataPower | 11612402 | Yes | N/A | Incident Description and Impact Statement: DataPower Support has observed that the DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support has contacted the ABIS Watch Desk, and they have acknowledged that their sFTP server is down. They are working on the issue and have advised their systems will be up within the next 30 minutes. The current ABIS backlog report reflects a backlog of approx. 86 transactions starting from 8:30 AM this morning.
Resolution: Nicole from DoD ABIS indicated the sFTP service was brought back up around 10:40 AM. E3 Support reached out to DataPower, who confirmed the connection to DoD has been restored. E3 Developers advised that we are receiving HTTP 200 responses from ABIS and the number of transactions is trending down. The backlog currently reflects all transactions submitted between 8:30 AM and 10:30 AM, during the time the DoD sFTP servers were down. E3 Support will determine how to clear the impacted transactions. | | | DoD ABIS sFTP Servers Down | | DoD ABIS | DoD ABIS engineers restored their sFTP servers. | Root cause not provided. | DoD ABIS | e3 Dev, DataPower, DoD ABIS Watch Desk | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | N/A | DoD ABIS Engineers | N/A | Yes | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 10/15/18 08:30 | 10/15/18 10:40 | 2:10 | N/A | | Incident Description and Impact Statement: DataPower Support has observed that the DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support has contacted the ABIS Watch Desk, and they have acknowledged that their sFTP server is down. They are working on the issue and have advised their systems will be up within the next 30 minutes. The current ABIS backlog report reflects a backlog of approx. 86 transactions starting from 8:30 AM this morning. | | OBP;#OFO;#OFO/SIGMA | | | 10/15/18 10:30 | DataPower | 11612402 | Yes | N/A | Incident Description and Impact Statement: DataPower Support has observed that the DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support has contacted the ABIS Watch Desk, and they have acknowledged that their sFTP server is down. They are working on the issue and have advised their systems will be up within the next 30 minutes. The current ABIS backlog report reflects a backlog of approx. 86 transactions starting from 8:30 AM this morning.
Resolution: Nicole from DoD ABIS indicated the sFTP service was brought back up around 10:40 AM. E3 Support reached out to DataPower, who confirmed the connection to DoD has been restored. E3 Developers advised that we are receiving HTTP 200 responses from ABIS and the number of transactions is trending down. The backlog currently reflects all transactions submitted between 8:30 AM and 10:30 AM, during the time the DoD sFTP servers were down. E3 Support will determine how to clear the impacted transactions. | | | DoD ABIS sFTP Servers Down | | DoD ABIS | DoD ABIS engineers restored their sFTP servers. | Root cause not provided. | DoD ABIS | e3 Dev, DataPower, DoD ABIS Watch Desk | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | N/A | DoD ABIS Engineers | N/A | Yes | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 10/4/18 08:55 | 10/4/18 17:15 | 8:20 | N/A | | Incident Description and Impact Statement: The DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support contacted DFBA Watch Desk for further investigation. Estimated time to resume normal operations is unknown at this time. E3 Support will monitor the situation and provide updates as they become available.
Update 1: Mike Gibson at DFBA confirmed that DOD ABIS had an interruption in service & that they restarted their sFTP servers. By 12:14 e3 Engineers were seeing traffic flow to & responses coming from ABIS. The Queue has dropped & e3 is monitoring along with DataPower for transaction flow.
Update 2: e3 Support is sending this transaction report over to ABIS to confirm whether or not they have received them into their system. If not e3 will be clearing the offending transactions from the log by setting them to complete.
Update 3: Bridge number: 888-546-3727 X18583332#
Update 4: Bridge call was spun up & joined by e3, Duty Officers, the ABIS sFTP group, and DataPower. Engineers are researching 404 errors & DataPower confirmed there have been multiple connection drops before the handshake to the ABIS servers. Most stuck transactions have been between 0859 & 1300 hours, so the connection drops may have affected the responses back to the e3 servers. Nicole (ABIS sFTP group) confirmed responses were sent back to e3 from the transaction log sent over. Engineers continue to probe the 404 & 401 errors along with successful responses. The connection drop times are 1437, 1432, 1429, 1409, 1200, & 1130 EDT. The bridge is still waiting for the CBP NOC to join. The backlog count has spiked again to 84 (most likely due to the connection drops). A second look at the recent backlog confirmed that the responses haven't been downloaded by e3. Engineers continue to investigate.
RESOLVED: e3 Support has monitored the ABIS queue for 90 minutes & has concluded that the system has been stable & processing transactions in real time long enough to justify ending the bridge call. The backlog has completely drained & there have been no spikes of any kind. OneNet has noticed that DataPower is overloading the ABIS connection with data traffic & that could be what caused the timeouts to & from ABIS. Because of this, e3 Support will continue to monitor for several more hours. ABIS could not give us details on why their system couldn’t deliver the responses to e3. | | OBP;#OFO;#OFO/SIGMA | | | 10/4/18 17:15 | e3 Support | 11574502 | Yes | N/A | 11:37 AM – ABIS backlog was noticed. E3 Support investigated AppDynamics and the servers were throwing numerous errors for submissions
11:53 AM - e3 Management was notified via email about ABIS backlog
12:08 PM – e3 Support sent first Situational Awareness
11:59 AM – Senior Software Developer (Hussein) reported seeing 500 error in the log
12:29 PM – e3 Support reached out to Mike Gibson
1:38 PM – List of ABIS transaction was forwarded to Mike Gibson
1:59 PM – Bridge call established between e3 Development team and Datapower team
2:05 PM – Datapower saw internal 500 error
2:10 PM – Datapower confirmed not seeing any issue
2:13 PM - Datapower reported that request processing failed connection was terminated
2:20 PM - Datapower reported SSL handshake was lost
2:24 PM – Duty Officers reached out to NOC
2:26 PM – John from Datapower joined the call
2:27 PM – Nicole from ABIS joined the call
2:29 PM – Nicole stated that she is not part of DoD ABIS; she is with the sFTP department. She reported that transactions were submitted successfully and responses were received by e3, and that sFTP is functioning 100%
2:35 PM – Datapower is investigating the 404 & 401 errors; source and destination IP addresses were provided
2:47 PM – Brandon reported that backlog is building up as of 2:07 PM
2:48 PM – John from Datapower reported that there are 404 errors around 2:07 showing in the log
2:49 PM – Waiting for CBP NOC to join
2:50 PM – Lars reported the issue is not on the server side
2:52 PM – Nicole dropped off the call. An updated list was sent over to Nicole to see if responses were received on the DoD side
2:55 PM – Nicole stated that ABIS sends out the responses but e3 is not downloading the response on the majority of the transactions
2:58 PM – Paul from CBP NOC joined he is doing check on the DMZ side
3:05 PM – Shawn from CBP NOC joined
3:16 PM – Speeding up the timers will not resolve the underlying issue, Nikhil stated
3:20 PM – e3 Support reaching out to Nicole from ABIS to provide clarity on what she saw
3:27 PM – Nikhil is in the process of speeding up timers
3:28 PM – Paul 13.14 does not go through DMZ, but goes through F5 servers.
3:30 PM – Brandon observed 20-23 transactions almost every hour pending, but receiving
3:30 PM - Halts DO joined to replace Joshua
3:36 PM – Nadim from EDME LAN joined
3:37 PM – Nadim going on to check on F5
3:47 PM – Brandon Long dropped off the bridge call; Terrance Hall from e3 will continue to lead the bridge
3:49 PM – Nadim doesn’t see anything
3:54 PM – Lars dropped off the call
3:53 PM – Paul asking for additional, John
3:59 PM – Nadim says there was a 30-second drop from all 3 servers
3:59 PM – John Datapower states the connection is timing out after 2 mins
4:00 PM – Nikhil sped up the timers to 15 sec to draw response from ABIS
4:11 PM – Paul states it’s not on CBP
4:12 PM – Nadim shows the connections
4:13 PM – Halts DO states to monitor until 4:30 PM to stand down the bridge
4:16 PM – e3 Support marked transactions from 8:59am to 11:19am to complete
4:27 PM – John saw connection drop
4:29 PM – Requested Tanveer to have OneNet join the bridge
4:36 PM – Reached out to OneNet to join the bridge
4:56 PM – Aris from OneNet joined
4:59 PM – Aris shows there may be an issue on the destination side (198.72.662.175)
5:05 PM – e3 Support reached out to DoD ABIS to get on the bridge
5:12 PM – e3 Support determined that since transactions were processing in real time, DoD ABIS could stand down
5:14 PM - Bridge call ended | | | (DoD ABIS) Unable to Process in Real Time | | E3 | One of the sFTP servers was restarted by ABIS engineers and e3 Developers sped up the ABIS timers | ABIS could not give us details on why their system couldn’t deliver the responses to e3. | ABIS, e3 Support | ABIS, EDME LAN, DataPower, OneNet, CBP NOC, Duty Officer | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | N/A | DoD ABIS | N/A | YES | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 10/4/18 08:55 | 10/4/18 17:15 | 8:20 | N/A | | Incident Description and Impact Statement: The DoD Automated Biometric Identification System (DoD ABIS) is not able to process biometric transactions in a timely manner. E3 Support contacted DFBA Watch Desk for further investigation. Estimated time to resume normal operations is unknown at this time. E3 Support will monitor the situation and provide updates as they become available.
Update 1: Mike Gibson at DFBA confirmed that DOD ABIS had an interruption in service & that they restarted their sFTP servers. By 12:14 e3 Engineers were seeing traffic flow to & responses coming from ABIS. The Queue has dropped & e3 is monitoring along with DataPower for transaction flow.
Update 2: e3 Support is sending this transaction report over to ABIS to confirm whether or not they have received them into their system. If not e3 will be clearing the offending transactions from the log by setting them to complete.
Update 3: Bridge number: 888-546-3727 X18583332#
Update 4: Bridge call was spun up & joined by e3, Duty Officers, the ABIS sFTP group, and DataPower. Engineers are researching 404 errors & DataPower confirmed there have been multiple connection drops before the handshake to the ABIS servers. Most stuck transactions have been between 0859 & 1300 hours, so the connection drops may have affected the responses back to the e3 servers. Nicole (ABIS sFTP group) confirmed responses were sent back to e3 from the transaction log sent over. Engineers continue to probe the 404 & 401 errors along with successful responses. The connection drop times are 1437, 1432, 1429, 1409, 1200, & 1130 EDT. The bridge is still waiting for the CBP NOC to join. The backlog count has spiked again to 84 (most likely due to the connection drops). A second look at the recent backlog confirmed that the responses haven't been downloaded by e3. Engineers continue to investigate.
RESOLVED: e3 Support has monitored the ABIS queue for 90 minutes & has concluded that the system has been stable & processing transactions in real time long enough to justify ending the bridge call. The backlog has completely drained & there have been no spikes of any kind. OneNet has noticed that DataPower is overloading the ABIS connection with data traffic & that could be what caused the timeouts to & from ABIS. Because of this, e3 Support will continue to monitor for several more hours. ABIS could not give us details on why their system couldn’t deliver the responses to e3. | | OBP;#OFO;#OFO/SIGMA | | | 10/4/18 17:15 | e3 Support | 11574502 | Yes | N/A | 11:37 AM – ABIS backlog was noticed. E3 Support investigated AppDynamics and the servers were throwing numerous errors for submissions
11:53 AM - e3 Management was notified via email about ABIS backlog
12:08 PM – e3 Support sent first Situational Awareness
11:59 AM – Senior Software Developer (Hussein) reported seeing 500 error in the log
12:29 PM – e3 Support reached out to Mike Gibson
1:38 PM – List of ABIS transaction was forwarded to Mike Gibson
1:59 PM – Bridge call established between e3 Development team and Datapower team
2:05 PM – Datapower saw internal 500 error
2:10 PM – Datapower confirmed not seeing any issue
2:13 PM - Datapower reported that request processing failed connection was terminated
2:20 PM - Datapower reported SSL handshake was lost
2:24 PM – Duty Officers reached out to NOC
2:26 PM – John from Datapower joined the call
2:27 PM – Nicole from ABIS joined the call
2:29 PM – Nicole stated that she is not part of DoD ABIS; she is with the sFTP department. She reported that transactions were submitted successfully and responses were received by e3, and that sFTP is functioning 100%
2:35 PM – Datapower is investigating the 404 & 401 errors; source and destination IP addresses were provided
2:47 PM – Brandon reported that backlog is building up as of 2:07 PM
2:48 PM – John from Datapower reported that there are 404 errors around 2:07 showing in the log
2:49 PM – Waiting for CBP NOC to join
2:50 PM – Lars reported the issue is not on the server side
2:52 PM – Nicole dropped off the call. An updated list was sent over to Nicole to see if responses were received on the DoD side
2:55 PM – Nicole stated that ABIS sends out the responses but e3 is not downloading the response on the majority of the transactions
2:58 PM – Paul from CBP NOC joined he is doing check on the DMZ side
3:05 PM – Shawn from CBP NOC joined
3:16 PM – Speeding up the timers will not resolve the underlying issue, Nikhil stated
3:20 PM – e3 Support reaching out to Nicole from ABIS to provide clarity on what she saw
3:27 PM – Nikhil is in the process of speeding up timers
3:28 PM – Paul 13.14 does not go through DMZ, but goes through F5 servers.
3:30 PM – Brandon observed 20-23 transactions almost every hour pending, but receiving
3:30 PM - Halts DO joined to replace Joshua
3:36 PM – Nadim from EDME LAN joined
3:37 PM – Nadim going on to check on F5
3:47 PM – Brandon Long dropped off the bridge call; Terrance Hall from e3 will continue to lead the bridge
3:49 PM – Nadim doesn’t see anything
3:54 PM – Lars dropped off the call
3:53 PM – Paul asking for additional, John
3:59 PM – Nadim says there was a 30-second drop from all 3 servers
3:59 PM – John Datapower states the connection is timing out after 2 mins
4:00 PM – Nikhil sped up the timers to 15 sec to draw response from ABIS
4:11 PM – Paul states it’s not on CBP
4:12 PM – Nadim shows the connections
4:13 PM – Halts DO states to monitor until 4:30 PM to stand down the bridge
4:16 PM – e3 Support marked transactions from 8:59am to 11:19am to complete
4:27 PM – John saw connection drop
4:29 PM – Requested Tanveer to have OneNet join the bridge
4:36 PM – Reached out to OneNet to join the bridge
4:56 PM – Aris from OneNet joined
4:59 PM – Aris shows there may be an issue on the destination side (198.72.662.175)
5:05 PM – e3 Support reached out to DoD ABIS to get on the bridge
5:12 PM – e3 Support determined that since transactions were processing in real time, DoD ABIS could stand down
5:14 PM - Bridge call ended | | | (DoD ABIS) Unable to Process in Real Time | | E3 | One of the sFTP servers was restarted by ABIS engineers and e3 Developers sped up the ABIS timers | ABIS could not give us details on why their system couldn’t deliver the responses to e3. | ABIS, e3 Support | ABIS, EDME LAN, DataPower, OneNet, CBP NOC, Duty Officer | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | N/A | DoD ABIS | N/A | YES | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 9/20/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 9/20/18 05:00 | 9/20/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 9/12/18 09:50 | 9/13/18 16:50 | 7:00 | CBP #11482695; DISA #2172898 | | Incident Description and Impact Statement: e3 Biometrics is seeing intermittent timeouts trying to connect to DoD. Per DataPower, a mixture of HTTP 200, 401, and 404 responses has been identified, with 86% of the transactions having errors in the past hour. E3 Support has reached out to the Duty Officer and requested that a bridge be established.
Update 1: bridge call has been established between e3 support, Duty Officers and John Leabhart from Datapower team. E3 Support is reaching out to DoD ABIS Watch Desk to gather updates on their findings.
Update 2: e3 Support made positive contact with Michael Gibson at DoD ABIS. Per Michael, a DHS CBP account is repeatedly uploading a file. E3 Developers have confirmed that the transaction is showing complete in the logs. Michael Gibson has advised he will be joining the bridge call to assist in troubleshooting efforts. E3 support is in the process of reaching out to DFBA to confirm if the transaction that was provided is still showing as stuck in a loop.
Update 3: Michael Gibson advised that ABIS engineers are in contact with DISA engineers to rule out networking issues between DISA and the sFTP server. CBP Duty Officers have requested CBP NOC and EDME LAN to join the call to further investigate whether there are network connection issues between DataPower going out to DoD. Source and destination IP addresses have been provided to EDME LAN to run trace routes between DataPower and DoD ABIS.
Update 4: Michael Gibson reported DoD has lost entire fiber nodes and the IP connection between DoD and commercial; all traffic is being routed in a different direction. Michael Gibson will forward a consolidated list of transactions that are looping so they can be cleared. At approx. 2:44 PM, SCM Lars Sisson restarted the ABIS servers, which were in a warning state. Per Michael Gibson, the issue resides on the DISA side, which is impacting more than the e3 connections. Engineers are actively running trace routes & awaiting updates from DISA.
Update 5: After ascertaining that there are no problems on the e3/OBIM/OneNet side pertaining to the DoD ABIS connectivity issue, e3 is closing down the bridge for now. E3 Support will be getting updates from DoD ABIS via the 1800-HELPDISA line. E3 Support will monitor the backlog & send updates every hour until the queue has drained & is processing in real time.
Update 6: E3 Support contacted the DISA help desk. Based off the notes in the ticket provided to e3 Support: 1723 Zulu, transport has visibility back to the Columbus ODXC and is working with OSS to see if they have the correct backup configuration files in order to restore the node; if they do not, all the trunks and circuits will have to be built one by one. Tier 2 transport is currently working to configure the node at this time. The node is down, affecting 400-plus circuits. Engineers are working to reconfigure the device in order to restore all traffic entering this node. E3 Support will monitor the backlog & send updates.
Update 7: E3 Support reached out to the Defense Information Systems Agency (DISA) help desk for an update and spoke with Glenda, who stated that the DISA system is down. There is no ETA on when normal operations will resume. E3 Support will monitor the backlog and provide updates.
Update 8: E3 Support reached out to the DISA help desk. Ashley from DISA stated that engineers continue to work with OSS to see if they have the correct backup configuration files in order to restore the node in question. There is no ETA at this time for resolution.
Update 9: E3 Support contacted the DISA help desk for a further update. Based off the notes in the ticket provided to e3 Support: an FSE has departed for the issue site and will inventory everything that comes in and out. There is no ETA at this time for resolution. E3 Support will monitor the backlog and provide updates.
Update 10: E3 Support contacted the DISA help desk. Derrick from DISA stated that engineers are still troubleshooting. Based off the notes in the ticket provided to e3 Support: 1330 Zulu, DGOC East Tier 2 reports access to SYLMVX through the PAC server has been up for a while, and access to the Europe server was just restored. The team is currently continuing to rebuild circuits manually. E3 Support will monitor the backlog and provide updates.
Update 11: E3 Support contacted the DISA help desk. Sam from DISA stated that engineers are still troubleshooting. Based off the notes in the ticket provided to e3 Support: CONUS SYLMVX went down at 0200 Zulu. Transport Tier 2 started working out of the Europe SYLMVX to continue manually rebuilding all circuits on the node, working off the Watch Officer list first. Transport is working out of the PAC SYLMVX but the connection is slow. OSS was notified of the SYLMVX issue, and the CONUS, Europe, and now PAC Tier 2 groups in all theaters are engaged and working on rebuilding circuits on the node.
Update 12: At 10:25 AM, e3 Support received notification from The DoD Automated Biometric Identification System (DoD ABIS) that they are not able to process biometric transactions. E3 Support contacted DFBA Watch Desk for further investigation. Estimated time to resume normal operations is unknown at this time. E3 Support will monitor the situation and provide updates as they become available.
Update 13: e3 Support reached out to DFBA Watch Desk and spoke with Shayia. ABIS engineers are continuing to troubleshoot the issue with their sFTP server. Estimated time to resume normal operations is unknown at this time.
Update 14: As of 11:51 AM, the DoD Automated Biometric Identification System (DoD ABIS) has reported that automated submissions are processing as normal. However, the manual process is affected by the partial outage that DoD is experiencing. The ABIS backlog has declined and the current count is 79 transactions. E3 Support will continue monitoring until the backlog reaches a normal level.
Update 15: As of 12:38 PM, DoD ABIS has reported their outage is over and they can now process transactions. E3 Support will continue monitoring until the backlog reaches normal levels.
Update 16: Per Senior Software Developer (Hussein), we’re receiving responses from DoD. Senior Software Developer (Hussein) also confirmed that the query was run over 24 hours, which is why it was showing a total of 79 transactions earlier. As of 2:10 PM, the current backlog count is 116. E3 Support will monitor the situation until the backlog clears.
Update 17: The backlog of ABIS transactions is slowly declining. New transactions are processing in real time. The current backlog shows a total of 147 transactions, 111 of which are associated with the ABIS issue and over the SLA. E3 Support will continue monitoring until the backlog returns to a normal level.
UPDATE 18: The backlog continues to slowly decline. An update from Michael Gibson at DoD ABIS confirms that all of the transaction files submitted to ABIS did make it to their servers, so he is going to check the integrity of those files and get back to us. E3 will monitor & report as needed until numbers return to normal.
Resolved: As of 4:53 PM e3 Support has confirmed that ABIS transactions are processing in real time. Initially e3 Support received notification from the Middleware team that DataPower reported seeing intermittent timeouts trying to connect to DoD. DataPower reported seeing a mixture of HTTP 200, 401, and 404 responses, and that 86% of the transactions were receiving errors in the past hour. After extensive troubleshooting efforts, EDME LAN, CBP NOC, and DHS OneNet were able to confirm, utilizing the source and destination IP addresses, that there was no network connectivity issue between DataPower and DoD. Due to the timeouts DataPower continued to see, Michael Gibson from DFBA reached out to engineers that were troubleshooting a network issue between DISA and DoD. Michael Gibson provided DISA ticket #2172898, associated with a node being down and causing traffic to route in different directions. E3 Support received notification on 9/13/2018 at 10:26 AM of an unplanned outage from DoD ABIS that they were unable to process biometric transactions, which was resolved at 12:29 PM. Michael Gibson with DFBA reported that they were able to successfully see inbound and outbound transmissions from the DHS CBP account to the DoD server. A list of transactions that were impacted during this time was sent over to Michael Gibson. Michael Gibson advised that the transmitted files did in fact make it to their server, and that he would take a closer look at the transactions in question to ensure that the files were completely transmitted. Once e3 Support receives confirmation from Michael Gibson regarding the transactions in question, a decision on how to clear the remaining transactions will be made. | | OBP;#OFO;#OFO/SIGMA | | | 9/12/18 10:00 | Datapower | 11482695 | Yes | | Updated Timeline: ABIS Timeline - Wed 9/12/18
Wed 9/12/18 10:06 AM - E3 Support received an email from Robert Golasky referencing intermittent timeouts trying to connect to DoD. Per DataPower, a mixture of HTTP 200, 401, and 404 responses has been identified, with 86% of the transactions having errors in the past hour.
Wed 9/12/18 10:07 AM – Team lead Brandon Long from e3 Support replied to the email to add SCM support (Lars) to look at it
Wed 9/12/18 11:30 AM – E3 Support requested a bridge call from the Duty Officer
Wed 9/12/18 11:30 AM - Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics sent out
Wed 9/12/18 11:57 AM – update 1: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics -S/S_E (ABIS): 74 S/S_E (IDENT): 58
Wed 9/12/18 12:04 pm – Shalini Wilfred e3 project manager joining the bridge call
Wed 9/12/18 12:15 PM – DataPower engineer Jason Ogden confirmed there is still no connection to the MQ channel
Wed 9/12/18 12:20 PM – e3 Support reached out to DoD ABIS and spoke with Michael Gibson; he stated the DHS CBP account is connected to the ABIS sFTP server and uploading files.
Wed 9/12/18 12:23 PM – Jose, e3 project manager, joined the bridge call
Wed 9/12/18 12:23 PM – Michael (manager) joined the bridge call
Wed 9/12/18 12:40 PM - Jose from e3 requested that the DFBA ABIS team join the bridge call
Wed 9/12/18 12:51 PM - Michael Gibson stated he needs a formal documented request in order to join the bridge call
Wed 9/12/18 12:56 pm – Michael Gibson Joined the bridge
Wed 9/12/18 1:15 PM - Michael Gibson from DFBA ABIS joined the bridge call
Wed 9/12/18 1:16 PM - Michael Gibson confirmed the connection to DHS is successful, but they saw this issue last year with multiple files stuck on resubmission; he will check with DFBA to confirm whether the transaction that was provided is still showing as stuck
Wed 9/12/18 1:20 PM – Brandon from e3 asked Michael Gibson if they had switched over their server; Michael stated they tried to switch but it failed, and they are still on the primary server
Wed 9/12/18 1:22 PM – A DataPower engineer stated they still see a timeout every 15 minutes
Wed 9/12/18 1:45 PM – Brandon from e3 did not want to take the bridge down and declined moving to hourly updates
Wed 9/12/18 1:48 PM - Michael Gibson gave Brandon the transactions that are stuck in a loop
Wed 9/12/18 1:55 PM - Bob Aswaygo (name possibly misspelled), branch chief from DISA
Wed 9/12/18 1:55 PM - Michael Gibson said the network issue is from the DFGS, outside of their network
Wed 9/12/18 1:59 PM - Michael Gibson is reaching out to his team to confirm network issues.
Wed 9/12/18 2:05 PM - e3 Support requested the DO to have EDME LAN and CBP NOC join
Wed 9/12/18 2:07pm - Michael Gibson has a network call with the DISA engineers to determine if there is a network issue with DISA and SFTP servers
Wed 9/12/18 2:10 PM - Tran, Viet from EDME LAN got on the call requesting the source and destination IP addresses
Wed 9/12/18 2:10 PM - Waiting for the source and destination IP addresses to be provided by the DMZ
Wed 9/12/18 2:21pm - Filbert from the CBP NOC joined
Wed 9/12/18 2:09 PM – Duty Officer added CBP NOC and EDME LAN to the email to join the bridge
Wed 9/12/18 2:33 PM – Justin from DoD stated they lost power to the entire IP NOC fiber node on the DoD commercial side, and ticket #2172898 has been created with DISA
Wed 9/12/18 2:37 PM – Michael Gibson dropped off the call and provided his number, 304-326-3156, if we need to reach out to him
Wed 9/12/18 2:44 PM – SCM team member Lars restarted the ABIS service
Wed 9/12/18 2:44 PM – Hussein e3 developer sent out email to EID team with high priority request- BBER-562 Clear ABIS looping transactions
Wed 9/12/18 2:56 PM – An EDME LAN engineer confirmed the source is outside of EDME LAN and dropped off the call
Wed 9/12/18 2:59 PM – Filbert from CBP NOC confirmed the traffic going to DoD is operating normally
Wed 9/12/18 3:00 PM – The Duty Officer asked if we need to reach out to OneNet and Mohammed Shareq, Major Incident Manager (MIM)
Wed 9/12/18 3:03 PM – John Leabhart from Data Power dropped the call
Wed 9/12/18 3:06 PM – Anny from C2 joining the call
Wed 9/12/18 3:10 PM – Mohammed Shareq reached out to One Net also created a ticket
Wed 9/12/18 3:30 PM – EDME LAN engineer dropped the call
Wed 9/12/18 3:40pm – Jason from DSIS asked for an update
Wed 9/12/18 3:48pm – Annie C2 and Jason DSIS dropped
Wed 9/12/18 3:50 PM – Arthur Lender from DHS OneNET joined the call
Wed 9/12/18 3:52pm – John from DMZ provided Arthur Lender from DHS OneNET the IP address and Source Destination
Wed 9/12/18 4:04pm – CBP NOC, DHS OneNET - confirms that traffic is flowing to DoD ABIS, but the traffic is having intermittent connectivity
Wed 9/12/18 4:06 PM – John from DMZ hasn't seen errors for some time
Wed 9/12/18 4:13 PM – The ABIS transaction backlog continues to grow
Wed 9/12/18 4:14pm – Nicole from DoD ABIS reports there is a network latency
Wed 9/12/18 4:17pm – e3 support request hourly updates from Nicole DoD ABIS
Wed 9/12/18 4:22 PM – e3 Support provided updates to the bridge and is awaiting government approval to end the bridge and move to hourly updates.
Wed 9/12/18 4:24 PM – Approval was received from Jose, Biometrics Operations Manager, to end the bridge even though the issue is not resolved. Bridge call ended
Wed 9/12/18 4:35 PM – Updates will come from calling 1-888-HELP DISA | | | (DoD ABIS) Situational awareness affecting e3 biometrics - CBP ticket #11482695 | | DoD ABIS | Resolved: As of 4:53 PM e3 Support has confirmed that ABIS transactions are processing in real time. Initially e3 Support received notification from the Middleware team that DataPower reported seeing intermittent timeouts trying to connect to DoD. DataPower reported seeing a mixture of HTTP 200, 401, and 404 responses, and that 86% of the transactions were receiving errors in the past hour. After extensive troubleshooting efforts, EDME LAN, CBP NOC, and DHS OneNet were able to confirm, utilizing the source and destination IP addresses, that there was no network connectivity issue between DataPower and DoD. Due to the timeouts DataPower continued to see, Michael Gibson from DFBA reached out to engineers that were troubleshooting a network issue between DISA and DoD. Michael Gibson provided DISA ticket #2172898, associated with a node being down and causing traffic to route in different directions. E3 Support received notification on 9/13/2018 at 10:26 AM of an unplanned outage from DoD ABIS that they were unable to process biometric transactions, which was resolved at 12:29 PM. Michael Gibson with DFBA reported that they were able to successfully see inbound and outbound transmissions from the DHS CBP account to the DoD server. A list of transactions that were impacted during this time was sent over to Michael Gibson. Michael Gibson advised that the transmitted files did in fact make it to their server, and that he would take a closer look at the transactions in question to ensure that the files were completely transmitted. Once e3 Support receives confirmation from Michael Gibson regarding the transactions in question, a decision on how to clear the remaining transactions will be made. | Undetermined, but possibly network connectivity between DISA and DoD | DoD ABIS | | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | N/A | DoD ABIS | N/A | e3 Biometrics, while users were receiving intermittent responses from ABIS | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 9/6/18 06:00 | 9/6/18 17:30 | 11:30 | NA | | Thu 9/6/2018 6:08 AM e3 support received an email from the Technology Operations Center (TOC) reporting AO KEY TEST DOWN FOR ABIS. At 6:35 AM DoD ABIS sent an email stating that the DoD Biometric Enterprise was in a degraded state and submission response times may be affected. E3 Support reached out to the ABIS Watch Desk, and DoD ABIS acknowledged that the issue was with their sFTP server. ABIS engineers were onsite troubleshooting the issue. At 9:32 AM ABIS sent an email stating that the DoD Biometric Enterprise was now able to process transactions. Upon checking the backlog of ABIS transactions, e3 support noticed that the backlog continued to increase. A bridge call was established with ABIS engineers and the DMZ group after e3 Developers noticed HTTP 401 Unauthorized errors. The DMZ group joined the bridge call and ruled out any issues on their end. The bridge call ended with a plan of action for ABIS engineers to check their system and report back with hourly updates. At 1:52 PM Marisa Collins with the DoD ABIS group confirmed they were seeing responses for ABIS being returned. Marisa confirmed the root cause was the missing /Responses folder. Marisa created the folder and gave it appropriate permissions. Once that was complete, ABIS confirmed submissions from E3 were processing. | | OBP;#OFO | | | 9/6/18 06:35 | DOD ABIS | 11458352 | Yes | NA | Thu 9/6/2018 6:08 AM – TOC sent an email with alerts that the following key test is down: BEMSD_PROD_ABIS_https_ESB-PRD-DP6801575-OUTSIDE1.cbp.dhs.gov ao_rep_cnt:2
Thu 9/6/2018 8:20 AM - e3 Developer Nikhil reported HTTP 401 Unauthorized errors
Thu 9/6/2018 9:32 AM - The DoD Biometric Enterprise is now able to process transactions.
Thu 9/6/2018 9:35 AM - Update 2: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 392
Thu 9/6/2018 9:59 AM - BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:28 AM - E3 support sent email; Investigating BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:34 AM - E3 Support sent email to SCM team requesting for someone to check alert
Thu 9/6/2018 10:40 AM- Bhasker Pagadala SCM confirmed ABIS servers look ok for the following alert BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:46 AM - CHENG, JIANYU SCM advised there was a spike around 10am, but things are back to normal
Thu 9/6/2018 10:49 AM – E3 support sent email to TOC reporting that there was a spike around the time the alert was reported, but since then ABIS services have stabilized
Thu 9/6/2018 11:53 AM - Update 3: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 581
Thu 9/6/2018 12:02 PM – E3 support provided bridge call info
Thu 9/6/2018 12:17 PM – Duty Officer added NOC to email distro
Thu 9/6/2018 12:44 PM – Jose Villafane sent an email thread requesting hourly updates on the HTTP 401 Unauthorized error.
Thu 9/6/2018 12:51 PM - Update 5: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 664
Thu 9/6/2018 12:58 PM - DoD ABIS Marisa Collins sent email with the following findings.
• After reviewing logs, we do not see any attempts to negotiate a full connection from the CBP account.
• We tested logging in with the account, and the server correctly logged the failed attempts.
• It may be possible that the SSL handshake is not completing successfully from the E3 side.
• We have restarted all pertinent services on our side.
• As a side note, we have other agencies successfully connecting via http/https.
• At this time, it seems possible that a problem specific to the CBP account/connection may be occurring.
• If there are any processes, workflows, or interfaces which can be restarted on the E3 side, please do so at this point, as this will help with our troubleshooting efforts.
Thu 9/6/2018 1:02 PM – Jose Villafane sent email to SCM team to restart our ABIS cluster
Thu 9/6/2018 1:07 PM - CHENG, JIANYU SCM begin rolling restart
Thu 9/6/2018 1:14 PM - Restart completed: e3ABIS_bemms-p0028_ms1 / e3ABIS_bemms-p0029_ms1 / e3ABIS_bemms-p0030_ms1
Thu 9/6/2018 1:28 PM – E3 Developers confirmed e3 services for ABIS were restarted, but engineers continued to see a 401 status returned on all three e3 servers. Developers requested the DataPower team check whether a channel restart was needed. Developers advised that they were seeing 500 errors until 8:20, at which point they switched to the 401 error.
• 2018-09-06 13:18:58,677 INFO putFileToCBPServerHttps: Response status code: 401 for tid CBSTN065090618111836 - [[ACTIVE] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'](ABISHttpService.java:96)
Thu 9/6/2018 1:28 PM – Jose sent email to e3 Development asking if ABIS request were getting processed
Thu 9/6/2018 1:32 PM – Alston Corrie from Datapower advised that there were no channels for Datapower to restart
Thu 9/6/2018 1:39 PM - Alston Corrie advised Datapower is still seeing the http 401 when we are attempting to connect to DoD.
• 1:27:33 PM mpgw information 111389719 10.5.64.60 0x80e0012d mpgw (E2BioRequest): Using Backside Server:https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml
• 1:27:33 PM mpgw information 111389719 10.5.64.60 0x80e0015b mpgw (E2BioRequest): HTTP response code 401 for 'https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml'
• 1:27:33 PM mpgw debug 111389719 10.5.64.60 0x80e00159 mpgw (E2BioRequest): Outbound HTTP with reused TCP session using HTTP/1.1 to https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml
• 1:27:33 PM mpgw debug 111389719 10.5.64.60 0x80e00536 mpgw (E2BioRequest): HTTP Header-Retention:Compression Policy: Off, URL:
Thu 9/6/2018 1:44 PM – Jose Villafane sent an email to the DFBA team asking when the failover to their backup servers occurred
Thu 9/6/2018 1:46 PM - Update 6: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics- S/S_E(ABIS): 705
Thu 9/6/2018 1:52 PM - DoD ABIS Marisa Collins sent email saying they see files coming in now.
Thu 9/6/2018 1:57 PM – Corrie Alston sent email saying Datapower, just saw a HTTP 200
• HTTP response code 200 for 'https://214.25.86.150:443/api/v1.1/files/Submissions'
Thu 9/6/2018 2:08 PM – e3 Developer Nikhil sent email confirming he would check the logs
Thu 9/6/2018 2:13 PM – e3 Developer Nikhil confirmed that they were seeing HTTP 200 and that the 401 issue is resolved
Thu 9/6/2018 2:30 PM – Jose Villafane sent email asking Marissa Collins what changed? Did DFBA cut back over to the primary ABIS servers? I know there was a job/process you all were waiting to complete on your primary ABIS servers? We need to get down to the root cause of this issue.
Thu 9/6/2018 2:48 PM – E3 support received the first email from the field reporting ABIS transactions processing with no returns | | | (DoD ABIS) Situational awareness affecting e3 biometrics | | DoD ABIS | The DoD ABIS backup server was missing a /Responses folder. Marisa Collins with ABIS created the folder and gave it appropriate permissions. Once this was completed, submissions started coming in from E3. | DoD ABIS backup server was missing a /Responses folder | DoD ABIS | DataPower, DoD ABIS, Duty Officer | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | NA | DoD ABIS | NA | Application was available; users were not receiving returns | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
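The timeline above hinges on DataPower seeing HTTP 401 instead of 200 from the DoD endpoint, and on confirming when 200s resumed. Below is a minimal sketch of a status-code tally against that endpoint, assuming the `requests` library, valid client credentials and trust configuration, and authorization to probe; the URL is taken from the DataPower log excerpts above, everything else is a placeholder.

```python
# Sketch only: poll the ABIS submissions endpoint and tally HTTP status codes,
# mirroring the 200/401 mix reported by DataPower. Credentials and cert paths
# are hypothetical placeholders; do not run this without authorization.
from collections import Counter
import time
import requests

URL = "https://214.25.86.150:443/api/v1.1/files/Submissions"   # from the DataPower logs
AUTH = ("cbp_account", "***")            # placeholder credentials
VERIFY = "/path/to/dod-ca-bundle.pem"    # placeholder trust store

counts = Counter()
for _ in range(30):
    try:
        resp = requests.get(URL, auth=AUTH, verify=VERIFY, timeout=10)
        counts[resp.status_code] += 1
    except requests.RequestException:
        counts["timeout/error"] += 1
    time.sleep(5)

print(dict(counts))  # e.g. {200: 25, 401: 4, 'timeout/error': 1}
```

A shift from mostly 401 to mostly 200 in this tally corresponds to the 1:52 PM/1:57 PM entries where DoD ABIS and DataPower reported traffic flowing again.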
| Unplanned Outage | e3 FPQ | 9/6/18 06:00 | 9/6/18 17:30 | 11:30 | NA | | Thu 9/6/2018 6:08 AM e3 support received an email from the Technology Operations Center (TOC) reporting AO KEY TEST DOWN FOR ABIS. At 6:35 AM DoD ABIS sent an email stating that the DoD Biometric Enterprise was in a degraded state and submission response times may be affected. E3 Support reached out to the ABIS Watch Desk, and DoD ABIS acknowledged that the issue was with their sFTP server. ABIS engineers were onsite troubleshooting the issue. At 9:32 AM ABIS sent an email stating that the DoD Biometric Enterprise was now able to process transactions. Upon checking the backlog of ABIS transactions, e3 support noticed that the backlog continued to increase. A bridge call was established with ABIS engineers and the DMZ group after e3 Developers noticed HTTP 401 Unauthorized errors. The DMZ group joined the bridge call and ruled out any issues on their end. The bridge call ended with a plan of action for ABIS engineers to check their system and report back with hourly updates. At 1:52 PM Marisa Collins with the DoD ABIS group confirmed they were seeing responses for ABIS being returned. Marisa confirmed the root cause was the missing /Responses folder. Marisa created the folder and gave it appropriate permissions. Once that was complete, ABIS confirmed submissions from E3 were processing. | | OBP;#OFO | | | 9/6/18 06:35 | DOD ABIS | 11458352 | Yes | NA | Thu 9/6/2018 6:08 AM – TOC sent an email with alerts that the following key test is down: BEMSD_PROD_ABIS_https_ESB-PRD-DP6801575-OUTSIDE1.cbp.dhs.gov ao_rep_cnt:2
Thu 9/6/2018 8:20 AM - e3 Developer Nikhil reported HTTP 401 Unauthorized errors
Thu 9/6/2018 9:32 AM - The DoD Biometric Enterprise is now able to process transactions.
Thu 9/6/2018 9:35 AM - Update 2: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 392
Thu 9/6/2018 9:59 AM - BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:28 AM - E3 support sent email; Investigating BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:34 AM - E3 Support sent email to SCM team requesting for someone to check alert
Thu 9/6/2018 10:40 AM- Bhasker Pagadala SCM confirmed ABIS servers look ok for the following alert BEMSD-E3-ABISService :: CPU Utilization is greater than 90.0. Observed value = 93.0
Thu 9/6/2018 10:46 AM - CHENG, JIANYU SCM advised there was a spike around 10am, but things are back to normal
Thu 9/6/2018 10:49 AM – E3 support sent email to TOC reporting that there was a spike around the time the alert was reported, but since then ABIS services have stabilized
Thu 9/6/2018 11:53 AM - Update 3: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 581
Thu 9/6/2018 12:02 PM – E3 support provided bridge call info
Thu 9/6/2018 12:17 PM – Duty Officer added NOC to email distro
Thu 9/6/2018 12:44 PM – Jose Villafane sent an email thread requesting hourly updates on the HTTP 401 Unauthorized error.
Thu 9/6/2018 12:51 PM - Update 5: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics - S/S_E(ABIS): 664
Thu 9/6/2018 12:58 PM - DoD ABIS Marisa Collins sent email with the following findings.
• After reviewing logs, we do not see any attempts to negotiate a full connection from the CBP account.
• We tested logging in with the account, and the server correctly logged the failed attempts.
• It may be possible that the SSL handshake is not completing successfully from the E3 side.
• We have restarted all pertinent services on our side.
• As a side note, we have other agencies successfully connecting via http/https.
• At this time, it seems possible that a problem specific to the CBP account/connection may be occurring.
• If there are any processes, workflows, or interfaces which can be restarted on the E3 side, please do so at this point, as this will help with our troubleshooting efforts.
Thu 9/6/2018 1:02 PM – Jose Villafane sent email to SCM team to restart our ABIS cluster
Thu 9/6/2018 1:07 PM - CHENG, JIANYU SCM begin rolling restart
Thu 9/6/2018 1:14 PM - Restart completed: e3ABIS_bemms-p0028_ms1 / e3ABIS_bemms-p0029_ms1 / e3ABIS_bemms-p0030_ms1
Thu 9/6/2018 1:28 PM – E3 Developers confirmed e3 services for ABIS were restarted, but engineers continued to see a 401 status returned on all three e3 servers. Developers requested the DataPower team check whether a channel restart was needed. Developers advised that they were seeing 500 errors until 8:20, at which point they switched to the 401 error.
• 2018-09-06 13:18:58,677 INFO putFileToCBPServerHttps: Response status code: 401 for tid CBSTN065090618111836 - [[ACTIVE] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'](ABISHttpService.java:96)
Thu 9/6/2018 1:28 PM – Jose sent email to e3 Development asking if ABIS request were getting processed
Thu 9/6/2018 1:32 PM – Alston Corrie from Datapower advised that there were no channels for Datapower to restart
Thu 9/6/2018 1:39 PM - Alston Corrie advised Datapower is still seeing the http 401 when we are attempting to connect to DoD.
• 1:27:33 PM mpgw information 111389719 10.5.64.60 0x80e0012d mpgw (E2BioRequest): Using Backside Server:https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml
• 1:27:33 PM mpgw information 111389719 10.5.64.60 0x80e0015b mpgw (E2BioRequest): HTTP response code 401 for 'https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml'
• 1:27:33 PM mpgw debug 111389719 10.5.64.60 0x80e00159 mpgw (E2BioRequest): Outbound HTTP with reused TCP session using HTTP/1.1 to https://214.25.86.150:443/api/v1.1/files/Responses/CBRGC011090618102548_ABIS.xml
• 1:27:33 PM mpgw debug 111389719 10.5.64.60 0x80e00536 mpgw (E2BioRequest): HTTP Header-Retention:Compression Policy: Off, URL:
Thu 9/6/2018 1:44 PM – Jose Villafane sent an email to the DFBA team asking when the failover to their backup servers occurred
Thu 9/6/2018 1:46 PM - Update 6: Situational Awareness: (DoD ABIS) Situational awareness affecting e3 biometrics- S/S_E(ABIS): 705
Thu 9/6/2018 1:52 PM - DoD ABIS Marisa Collins sent email saying they see files coming in now.
Thu 9/6/2018 1:57 PM – Corrie Alston sent email saying Datapower, just saw a HTTP 200
• HTTP response code 200 for 'https://214.25.86.150:443/api/v1.1/files/Submissions'
Thu 9/6/2018 2:08 PM – e3 Developer Nikhil sent email confirming he would check the logs
Thu 9/6/2018 2:13 PM – e3 Developer Nikhil confirmed that they were seeing HTTP 200 and that the 401 issue is resolved
Thu 9/6/2018 2:30 PM – Jose Villafane sent email asking Marissa Collins what changed? Did DFBA cut back over to the primary ABIS servers? I know there was a job/process you all were waiting to complete on your primary ABIS servers? We need to get down to the root cause of this issue.
Thu 9/6/2018 2:48 PM – E3 support received the first email from the field reporting ABIS transactions processing with no returns | | | (DoD ABIS) Situational awareness affecting e3 biometrics | | DoD ABIS | The DoD ABIS backup server was missing a /Responses folder. Marisa Collins with ABIS created the folder and gave it appropriate permissions. Once this was completed, submissions started coming in from E3. | DoD ABIS backup server was missing a /Responses folder | DoD ABIS | DataPower, DoD ABIS, Duty Officer | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest level supervisor at the station will be the final deciding official on the final detention disposition. | NA | DoD ABIS | NA | Application was available; users were not receiving returns | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 9/5/18 19:35 | 9/6/18 05:55 | 10:20 | N/A | | Incident Description and Impact Statement: E3 Support received multiple emails from multiple sites and a direct phone call stating that they were receiving the error "failed to submit transaction from terminal" whenever users attempted to submit a Search & Enroll on a subject. | | OBP;#OFO | | | 9/6/18 00:05 | Direct Call from Marathon Fla USBP site. 2nd Notification By TSD at 3:20am | 11458099 | Yes | N/A | Time line For error Failed to Submit Transaction Terminal 9/5/2018
8:07PM First email was received with the error ”Failed To Submit Transaction” from terminal.
9/6/2018
12:06AM Direct call to e3 Support from Marathon Fla, USBP site stating the error & the fact they could not submit subjects.
3:23AM The JAAC sends email notice that multiple sites are having e3 issues
3:47AM TSD notifies e3 Support that Multiple sites are experiencing issues submitting “Search & Enroll” (S_E) transactions
4:15AM Duty Officers spun up bridge call with OBIM & ICE DBA’s.
4:17 AM e3 support sent out situational awareness, reached out to e3 developer Nikhil, and left a voicemail for Wes
4:25 AM e3 support joined the call
4:30 AM e3 support Nielab joined the call
4:33 AM e3 developer Nikhil joined the call
4:43 AM Shalini (e3 PM) joins the call
4:48 AM Upon checking the database, Nikhil stated we were not getting NGI and IDENT responses for the last 9 hours of submissions; the last transaction received was at 21:27
5:00 AM Lars joined the call
5:03 AM Nikhil stated the database was running out of tablespace and more needed to be added; he asked the Duty Officer to reach out so an ICE Production DBA could join the call
5:04 AM Nikhil suggested putting up the site-down page; MaryAnn approved the site-down page
5:08 AM CBP NOC engineer joined the call and stated he checked and everything is up and running on their end
5:12 AM ICE DBA joined the call
5:45 AM Jose asked Nikhil why, if we were running out of space, the 10-minute report was showing fewer transactions since the site-down page went up compared to before; Nikhil will investigate
5:45 AM ICE DBA confirmed tablespace has been added
5:50 AM e3 Support reports that additional tablespace has been extended, the E3 Biometric Application was bounced, and confirmation has been received
5:55 AM E3 support is reaching out to the site to confirm
5:58 AM Removed the site-down page | | | e3 Biometrics error: Failure To Submit Search & Enroll From Terminal | | ICE/EID | Additional tablespace has been extended. The ICE DBA team allocated three additional data files to tablespace ENF_DATA1_LOB_DATA1. Per ICE DBA Valeriy Voyts, each new datafile has a 3 GB initial size and is auto-extendable up to 32 GB. The E3 Biometric Application was bounced, the site-down page was removed, and e3 Support has received confirmation from the field that users are now able to submit search/enrollment and booking transactions successfully. | DB tablespace ran out of space | ICE DBA | OBIM, JAAC, ICE DBAs, SCM | Users were unable to submit search & enrolls of subjects. | No | ICE DBAs | DBA tablespace will be automatically added | e3 Application was up & running; users were unable to submit search & enrolls | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
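The resolution above describes adding datafiles to the ENF_DATA1_LOB_DATA1 tablespace (3 GB initial size, auto-extendable to 32 GB each). A minimal sketch of that kind of free-space check and datafile addition is below, assuming the python-oracledb driver, DBA privileges, and a hypothetical datafile path, DSN, and credentials; the actual values used by the ICE DBA team are not recorded in this log.

```python
# Sketch only: check tablespace free space, then add a datafile sized per the
# resolution notes (3 GB initial, autoextend to 32 GB). Paths/DSN are hypothetical.
import oracledb

TABLESPACE = "ENF_DATA1_LOB_DATA1"

conn = oracledb.connect(user="dba_user", password="***", dsn="eid-host:1521/EIDPRD")
cur = conn.cursor()

# How much free space is left in the tablespace right now?
cur.execute(
    """SELECT NVL(SUM(bytes), 0) / 1024 / 1024 AS free_mb
         FROM dba_free_space
        WHERE tablespace_name = :ts""",
    ts=TABLESPACE,
)
print("Free MB:", cur.fetchone()[0])

# Add one datafile; per the notes, the DBA team added three of these.
cur.execute(
    f"""ALTER TABLESPACE {TABLESPACE}
        ADD DATAFILE '/u02/oradata/eid/enf_data1_lob_data1_04.dbf'
        SIZE 3G AUTOEXTEND ON NEXT 512M MAXSIZE 32G"""
)
conn.close()
```

With AUTOEXTEND enabled as described, follow-up space additions should happen automatically, which matches the "tablespace will be automatically added" note in the resolution fields.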
| Unplanned Outage | e3 FPQ | 9/5/18 19:35 | 9/6/18 05:55 | 10:20 | N/A | | Incident Description and Impact Statement: E3 Support received multiple emails from multiple sites and a direct phone call stating that they were receiving the error "failed to submit transaction from terminal" whenever users attempted to submit a Search & Enroll on a subject. | | OBP;#OFO | | | 9/6/18 00:05 | Direct Call from Marathon Fla USBP site. 2nd Notification By TSD at 3:20am | 11458099 | Yes | N/A | Time line For error Failed to Submit Transaction Terminal 9/5/2018
8:07PM First email was received with the error ”Failed To Submit Transaction” from terminal.
9/6/2018
12:06AM Direct call to e3 Support from Marathon Fla, USBP site stating the error & the fact they could not submit subjects.
3:23AM The JAAC sends email notice that multiple sites are having e3 issues
3:47AM TSD notifies e3 Support that Multiple sites are experiencing issues submitting “Search & Enroll” (S_E) transactions
4:15AM Duty Officers spun up bridge call with OBIM & ICE DBA’s.
4:17 AM e3 support sent out situational awareness, reached out to e3 developer Nikhil, and left a voicemail for Wes
4:25 AM e3 support joined the call
4:30 AM e3 support Nielab joined the call
4:33 AM e3 developer Nikhil joined the call
4:43 AM Shalini (e3 PM) joins the call
4:48 AM Upon checking the database, Nikhil stated we were not getting NGI and IDENT responses for the last 9 hours of submissions; the last transaction received was at 21:27
5:00 AM Lars joined the call
5:03 AM Nikhil stated the database was running out of tablespace and more needed to be added; he asked the Duty Officer to reach out so an ICE Production DBA could join the call
5:04 AM Nikhil suggested putting up the site-down page; MaryAnn approved the site-down page
5:08 AM CBP NOC engineer joined the call and stated he checked and everything is up and running on their end
5:12 AM ICE DBA joined the call
5:45 AM Jose asked Nikhil why, if we were running out of space, the 10-minute report was showing fewer transactions since the site-down page went up compared to before; Nikhil will investigate
5:45 AM ICE DBA confirmed tablespace has been added
5:50 AM e3 Support reports that additional tablespace has been extended, the E3 Biometric Application was bounced, and confirmation has been received
5:55 AM E3 support is reaching out to the site to confirm
5:58 AM Removed the site-down page | | | e3 Biometrics error: Failure To Submit Search & Enroll From Terminal | | ICE/EID | Additional tablespace has been extended. The ICE DBA team allocated three additional data files to tablespace ENF_DATA1_LOB_DATA1. Per ICE DBA Valeriy Voyts, each new datafile has a 3 GB initial size and is auto-extendable up to 32 GB. The E3 Biometric Application was bounced, the site-down page was removed, and e3 Support has received confirmation from the field that users are now able to submit search/enrollment and booking transactions successfully. | DB tablespace ran out of space | ICE DBA | OBIM, JAAC, ICE DBAs, SCM | Users were unable to submit search & enrolls of subjects. | No | ICE DBAs | DBA tablespace will be automatically added | e3 Application was up & running; users were unable to submit search & enrolls | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 9/5/18 19:35 | 9/6/18 05:55 | 10:20 | N/A | | Incident Description and Impact Statement: E3 Support received multiple emails from multiple sites and a direct phone call stating that they were receiving the error "failed to submit transaction from terminal" whenever users attempted to submit a Search & Enroll on a subject. | | OBP;#OFO | | | 9/6/18 00:05 | Direct Call from Marathon Fla USBP site. 2nd Notification By TSD at 3:20am | 11458099 | Yes | N/A | Time line For error Failed to Submit Transaction Terminal 9/5/2018
8:07PM First email was received with the error ”Failed To Submit Transaction” from terminal.
9/6/2018
12:06AM Direct call to e3 Support from Marathon Fla, USBP site stating the error & the fact they could not submit subjects.
3:23AM The JAAC sends email notice that multiple sites are having e3 issues
3:47AM TSD notifies e3 Support that Multiple sites are experiencing issues submitting “Search & Enroll” (S_E) transactions
4:15AM Duty Officers spun up bridge call with OBIM & ICE DBA’s.
4:17 AM e3 support sent out situational awareness, reached out to e3 developer Nikhil, and left a voicemail for Wes
4:25 AM e3 support joined the call
4:30 AM e3 support Nielab joined the call
4:33 AM e3 developer Nikhil joined the call
4:43 AM Shalini (e3 PM) joins the call
4:48 AM Upon checking the database, Nikhil stated we were not getting NGI and IDENT responses for the last 9 hours of submissions; the last transaction received was at 21:27
5:00 AM Lars joined the call
5:03 AM Nikhil stated the database was running out of tablespace and more needed to be added; he asked the Duty Officer to reach out so an ICE Production DBA could join the call
5:04 AM Nikhil suggested putting up the site-down page; MaryAnn approved the site-down page
5:08 AM CBP NOC engineer joined the call and stated he checked and everything is up and running on their end
5:12 AM ICE DBA joined the call
5:45 AM Jose asked Nikhil why, if we were running out of space, the 10-minute report was showing fewer transactions since the site-down page went up compared to before; Nikhil will investigate
5:45 AM ICE DBA confirmed tablespace has been added
5:50 AM e3 Support reports that additional tablespace has been extended, the E3 Biometric Application was bounced, and confirmation has been received
5:55 AM E3 support is reaching out to the site to confirm
5:58 AM Removed the site-down page | | | e3 Biometrics error: Failure To Submit Search & Enroll From Terminal | | ICE/EID | Additional tablespace has been extended. The ICE DBA team allocated three additional data files to tablespace ENF_DATA1_LOB_DATA1. Per ICE DBA Valeriy Voyts, each new datafile has a 3 GB initial size and is auto-extendable up to 32 GB. The E3 Biometric Application was bounced, the site-down page was removed, and e3 Support has received confirmation from the field that users are now able to submit search/enrollment and booking transactions successfully. | DB tablespace ran out of space | ICE DBA | OBIM, JAAC, ICE DBAs, SCM | Users were unable to submit search & enrolls of subjects. | No | ICE DBAs | DBA tablespace will be automatically added | e3 Application was up & running; users were unable to submit search & enrolls | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 8/24/18 00:00 | 8/24/18 13:45 | 13:45 | N/A | | EID Connectivity / slowness impacting e3 applications | | OBP;#OFO | | | 8/27/18 13:00 | Reconvened bridge call from prior day | N/A | Yes | N/A | 8/24/2018 Bridge Call Starts
1:07 PM - Plan of action from yesterday was to add CPU to the database. Mark from ICE stated that some research needs to be done. He stated that his team will add the CPU one time; if the issue is not resolved, Oracle will be engaged to figure out what is causing the spike in CPU.
1:10 PM - ICE DBA Mark H Scott continues to discuss options to add more CPUs on the ICE side
1:10 PM - Wes stated that the CPU usage chart/graph shows a spike, and the e3 application shows a spike at the same time. Something has changed in the past 3 months; Wes is not sure what has changed that has been causing the latency issue during normal business hours. The date and time range for the performance issue was sent over to ICE. The ICE DBA is looking for a set of date and time ranges to compare. Per Wes, the response-time spike in our application went from 5 seconds to 11. Wes asked the EID team what the timetable is to add the CPU. Mark stated that they will need stakeholder approval to bounce the system once the CPU is added; the configuration change needs approval. They need to submit an ECR, explain the issue, and confirm it meets the criteria.
1:16 PM - Wes stated that it’s a level of emergency for BEMS and latency is impacting our users.
1:18 PM - Project Manager Vidhya Dandi is determining what measures are needed to get approval for an outage to add more CPU
1:20 PM – ICE DBA representative Fongu stated that EID will take a 2-hour outage to implement the CPU addition; Wes asked whether the weekend would be good to add the CPU. Vidhya from ICE stated that she will start the ECR process today so it can be implemented over the weekend.
1:25 PM – ICE DBA Fongu stated we need to Make sure we don’t kill the weekly backup. Per Vidya the backup starts at 8:30pm Friday and runs through Saturday 4:30am
1:27 PM – Managers and engineers discuss possible date and times to perform the required maintenance to address the CPU usage
1:30 PM - Project Manager Vidhya agreed to e3 Program Manager’s suggestion for 9:00 AM Sunday, scheduled time to implement an ECR for a rolling restart of the EID services to address the CPU utilization
1:30 PM – Vidya stated that they will get stake holders approval for adding CPU to the database on Sunday at 9:00am. It will not require an outage it will be rolling restart but users may see some performance degradation during that time.
1:35 PM – e3 Developer Andrew Smalera provides detailed findings for e3 core application latency issues across the board. AppDynamics shows spikes in application response time. Certain servers are getting hung with blocked threads, resulting in a restart. The spikes typically occur during business hours, starting around 8:00 or 10:00 AM. Multiple e3 applications connect to EID, and all are showing similar spike behavior.
1:40 PM – An ICE engineer asked whether the e3 application uses any database other than EID
1:44 PM – ICE DBAs requested a one-hour sample of the e3 queries coming into EID on August 22nd for review
1:41 PM – Andrew stated that the e3 application also uses a configuration database, which consists of only one table
1:44 PM – Andrew asked to look at the queries that run from the e3 account to see which queries take a long time to execute and may be causing the latency issue (see the query sketch after this entry)
1:46 PM – Fongu asked for some queries showing the exact time the performance issue happens, for example when the application response time spikes. Andrew stated that he will provide queries showing the dates and times when the latency occurred.
1:48 PM – There were no more questions from the technical team and the bridge call ended. | | | Second occurrence: EID Connectivity / slowness impacting e3 applications | | ICE/EID | Upgrade CPU capacity | CPU utilization | | EDME LAN, OneNet, NOC, ICE DBA | Slowness impacting e3 applications | N/A | ICE DBA | N/A | Applications were available but experienced slowness | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
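The 1:44 PM entry above asks which e3 queries against EID take the longest to execute. One way to pull that from Oracle's shared SQL area is sketched below, assuming the python-oracledb driver, read access to V$SQL, and a hypothetical parsing schema name, DSN, and credentials; none of these specifics appear in the log.

```python
# Sketch only: list the top statements by total elapsed time for one schema.
# Schema name, DSN, and credentials are hypothetical placeholders.
import oracledb

conn = oracledb.connect(user="monitor_user", password="***", dsn="eid-host:1521/EIDPRD")
cur = conn.cursor()
cur.execute(
    """SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1e6, 1)                         AS total_elapsed_s,
              ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_s,
              SUBSTR(sql_text, 1, 80)                              AS sql_text
         FROM v$sql
        WHERE parsing_schema_name = :schema
        ORDER BY elapsed_time DESC
        FETCH FIRST 20 ROWS ONLY""",
    schema="E3_APP",
)
for row in cur:
    print(row)
conn.close()
```

Comparing the average elapsed times here against the AppDynamics response-time spikes is essentially the cross-check Andrew and Fongu agreed to on the call.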
| Unplanned Outage | e3 Processing | 8/24/18 00:00 | 8/24/18 13:45 | 13:45 | N/A | | EID Connectivity / slowness impacting e3 applications | | OBP;#OFO | | | 8/27/18 13:00 | Reconvened bridge call from prior day | N/A | Yes | N/A | 8/24/2018 Bridge Call Starts
1:07 PM - Plan of action from yesterday was to add CPU to the database. Mark from ICE stated that some research needs to be done. He stated that his team will add the CPU one time; if the issue is not resolved, Oracle will be engaged to figure out what is causing the spike in CPU.
1:10 PM - ICE DBA Mark H Scott continues to discuss options to add more CPUs on the ICE side
1:10 PM - Wes stated that the CPU usage chart/graph shows a spike, and the e3 application shows a spike at the same time. Something has changed in the past 3 months; Wes is not sure what has changed that has been causing the latency issue during normal business hours. The date and time range for the performance issue was sent over to ICE. The ICE DBA is looking for a set of date and time ranges to compare. Per Wes, the response-time spike in our application went from 5 seconds to 11. Wes asked the EID team what the timetable is to add the CPU. Mark stated that they will need stakeholder approval to bounce the system once the CPU is added; the configuration change needs approval. They need to submit an ECR, explain the issue, and confirm it meets the criteria.
1:16 PM - Wes stated that it’s a level of emergency for BEMS and latency is impacting our users.
1:18 PM - Project Manager Vidhya Dandi is determining what measures are needed to get approval for an outage to add more CPU
1:20 PM – ICE DBA representative Fongu stated that EID will take a 2-hour outage to implement the CPU addition; Wes asked whether the weekend would be good to add the CPU. Vidhya from ICE stated that she will start the ECR process today so it can be implemented over the weekend.
1:25 PM – ICE DBA Fongu stated we need to Make sure we don’t kill the weekly backup. Per Vidya the backup starts at 8:30pm Friday and runs through Saturday 4:30am
1:27 PM – Managers and engineers discuss possible date and times to perform the required maintenance to address the CPU usage
1:30 PM - Project Manager Vidhya agreed to e3 Program Manager’s suggestion for 9:00 AM Sunday, scheduled time to implement an ECR for a rolling restart of the EID services to address the CPU utilization
1:30 PM – Vidya stated that they will get stake holders approval for adding CPU to the database on Sunday at 9:00am. It will not require an outage it will be rolling restart but users may see some performance degradation during that time.
1:35 PM – e3 Developer Andrew Smalera provides detailed findings for e3 core application latency issues across the board. AppDynamics shows spikes in application response time. Certain servers are getting hung with blocked threads, resulting in a restart. The spikes typically occur during business hours, starting around 8:00 or 10:00 AM. Multiple e3 applications connect to EID, and all are showing similar spike behavior.
1:40 PM – An ICE engineer asked whether the e3 application uses any database other than EID
1:44 PM – ICE DBAs requested a one-hour sample of the e3 queries coming into EID on August 22nd for review
1:41 PM – Andrew stated that the e3 application also uses a configuration database, which consists of only one table
1:44 PM – Andrew asked to look at the queries that run from the e3 account to see which queries take a long time to execute and may be causing the latency issue
1:46 PM – Fongu asked for some queries showing the exact time the performance issue happens, for example when the application response time spikes. Andrew stated that he will provide queries showing the dates and times when the latency occurred.
1:48 PM – There were no more questions from the technical team and the bridge call ended. | | | Second occurrence: EID Connectivity / slowness impacting e3 applications | | ICE/EID | Upgrade CPU capacity | CPU utilization | | EDME LAN, OneNet, NOC, ICE DBA | Slowness impacting e3 applications | N/A | ICE DBA | N/A | Applications were available but experienced slowness | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Detentions | 8/23/18 11:45 | 8/23/18 16:00 | 4:15 | N/A | |
Throughout the overnight hours, starting at 22:13 yesterday evening, the TOC observed sporadic, random failures for the E3_EID application. | | OBP;#OFO | | | 8/23/18 11:45 | Technology Operations Center | N/A | Yes | N/A | Thursday 8/23/2018
11:44 AM – EDM group advised that one of the subnets rerouted traffic 35 mins ago. EDME looking into why this happened
11:47 AM - EDME Virtualization asked to join call
11:49 AM - Lars confirmed that the e3 Detentions p012 and p014 servers appear to be diving down (servers maxing out). Lars from SCM group is requesting heap dump of the servers from EWS Engineer Saurin Shah.
11:52 AM - Wes Gould states they are still investigating the rerouting of the subnet, if the rerouting could be the cause of the issues.
11:56 AM - Wes requested a status of the heap dump, EWS Engineer Saurin Shah, advised they are working on the request
11:59 AM - AO portal displays fail time connecting to the EID with a page timeout at 60.11943
12:00 PM - EWS Engineer Saurin Shah confirmed the first Heap dump on the requested server has been complete. EWS is currently working on completing the second Heap dump.
12:06 PM - Wes asks Lars to save the server dumps for review by others.
12:07 PM - EWS Engineer Saurin rebooted e3 detention servers
12:09 PM – EWS completed restart of e3 Detention server
12:09 PM - Windows Services group joins bridge
12:15 PM - Network team asked Lars to confirm the server IP, 10.5.65.50
12:22 PM - EDME is seeing an error on the packet capture – Oracle SQL - Return OPI parameter. E3 Dev team working with EDME on error
12:24 PM - Jose V asking for Andrew to join in with the application team. Oracle SQL error return status to be investigated. ORA01 (Literal does not match format string)
12:26 PM - Wes Gould update: EDME is still running down patch updates.
12:30 PM - Engineers on the bridge call are looking for confirmation from EID that they aren’t doing anything at the moment to impact the network or the database.
12:38 PM – Project Manager Jose Villafane reached out to Vidhya Dandi to request an ICE DBA to join the call, in order to run queries
12:42 PM – Wes asks Andrew to be in a separate group with EDME LAN, maybe moving to a separate bridge number.
12:50 PM – Lars with the SCM group provided heap dump to the e3 dev team for analysis.
E3 dev team analyzing heap dump – analysis was not conclusive. Location: S:\BEMS\e3\Latency (first heap dump for Detentions)
12:53 PM – Wes is getting ICE DBA to run some queries locally to test.
1:00 PM - EDME LAN is taking a look at the Crypto routers, counting on net flow to gather more information. Still awaiting ICE DBA to join the call. Also looking at getting OneNet engineer to join the call.
1:07 PM – Wes updated, No issues found from the subnet rerouting
1:08 PM – BEMSD Program Manager Wes Gould confirms the Oracle errors seen in packet captures are not the source of the issue and are normal errors. E3 Developer Brian Fox confirmed from the heap dump review that the majority of the threads are in the BLOCKED state. Brian Fox is now requesting a full thread dump from the server in question in hopes of finding the stack traces for all the blocked threads (a thread-dump sketch follows this entry's timeline).
1:10 PM – e3 Developer Andrew Smalera is requesting e3 developers to look at AppDynamics for the e3 Detentions app for the last day, last 6 hours, and last hour. Two servers (12 and 14) died out but the rest seemed OK. His reasoning: if it were the database, it would probably show across all servers, and likewise if it were code, unless some specific condition is triggering a code issue. Since the behavior is being seen across different applications, it feels more like the network or something specific to those servers.
1:15 PM – Lars with the SCM group is reporting that server uxvnwg001a1076_e3_processing_bemms-p009_ms1 is showing hung session since the previous day
1:17 PM – e3 Developer Brian Fox advised that the e3 Detentions server p014 was cut out of load balancing just after noon on 8/22/2018 because its response time spiked
Presumably this is because all of p014's threads entered a blocked state
1:23 PM – Asking how much data is being pulled from the database and how long database calls are taking (reported on by Kelly Ray). Also asking to put the e3 Home page up at DC1 so testing could be done.
1:32 PM – Rochelle asks Wes for an update. The packet capture is normal, so is the subnet rerouting, still waiting on ICE DBA’s to join. Rochelle stated to give them 15 more minutes then try other back channels.
1:37 PM – Lars is recycling the Processing server 009. Group is wanting the ICE DBA’s on the bridge.
1:40 PM – Tiffany McNeil from ICE joins asking if they needed PROD dba’s to join, no response
1:41 PM – Wes asking for ICE DBA Fongu to join bridge.
1:42 PM - Fongu from EID joined the call. Working with EDME on the problem IP (10.16.36.85). EID servers: 192.168.228.61/99.
2:20 PM- EDME reached out to DC1 Network engineering to locate the device: 10.16.36.85
2:20 PM - ICE DBA Fongu Ngufor sent over CPU Utilization graph from August 16th to the 23 for engineers on the bridge call to review
2:20 PM – ICE Project Manager Vidhya Dandi sent over CPU Utilization for the last 24 hours starting at 3:00 PM August 22, 2018
2:20 PM - E3/EMDE Team has matched the CPU utilization from EID with e3’s slow response times
2:25 PM – Program Manager Wes Gould requesting someone from ICE to confirm if something is running during the times of higher utilization shown on the graph
2:28 PM - ICE DBA Fongu Ngufor sent over Memory Utilization graph from August 16th to the 23 for engineers on the bridge call to review
2:28 PM – Muhammed from CBP NOC is on the line with DC1
2:30 PM – Program Manager Wes sent Mark Scott from EID a list of EID jobs that e3 received from Vidhya
2:30 PM – Wes asks Fongu if he has a graph spanning server trash collection or I/O connectivity & Fongu stated he’ll check.
2:32 PM – Andrew rejoins bridge. Wes asks for list of jobs running on EID database & ICE is checking.
2:33PM – DC1 is going to join bridge
2:37 PM – Fongu send I/O use graphic to group
2:40 PM – DC1 is asked if device 10.16.36.85 is a firewall device, or load balancer, etc. Attempts are being made to traceroute to it or enter device.
2:42 PM - ICE DBA Mark H Scott requesting a list of e3 applications that touch the EID data base
2:45 PM - Project Manager Vidhya Dandi sent over Memory Utilization graph for the last 7 days for review
2:50 PM – DC1 states the device is a server that they do NOT have access to. Question raised: what device is NATting to it? 192.168.228.99 is NATting to 10.16.36.90
2:54 PM - DC1/EID/EDME 10.16.36.85 IP currently mapped 1 to 1 to the EID server: 192.168.228.61
2:54 PM – DC1 engineer Victor looking into what server pool members are mapped to 10.16.36.85 IP
10.16.36.85 IP is a F5 VIP
2:56 PM - Wes asks if ICE EID DBA’s reconciled the jobs list with their list
3:04 PM – Group is asking for IP address confirmation of the device (10.16.36.85) virtual IP. Not sure if it’s a load balancer, still investigating.
3:09 PM – Device is named D1ACLPRIC003A1, Looks like some kind of mainframe device based on descriptions.
3:11 PM – ICE asked if the IP address means anything to them. DC1 reports that they don’t use that naming convention of their devices.
3:15 PM – DC1 asks why we are trying to track down the device. It is an ICE device (AIX mainframe device), confirmed. Why is it in front of the server.
3:15 PM - EDME requested Fongu to provide the CPU/IO graphs for 101
This was provided by Fongu.
3:17 PM – Jose V asks Fongu who manages this device or sets up this device.
3:24 PM – The device is listed as a Database Listener for the ICE DB
3:25 PM – Fongu is able to pull the CPU stats from the 2nd & 3rd of August.
3:29 PM – Matthew is taking over from Muhammed at the CBP NOC
3:33 PM – Discussion is about the CPU usage at ICE (from the CPU usage information sent from the ICE side); when it goes above 80–90%, it corresponds to issues for e3. Asking about adding another server to their cluster. E3 queries are taking too long, which is confirmed by the data sent from Fongu.
3:43 PM – Discussion going up the chain about adding the extra CPU (through director Tom Queen) on ICE side. Jose wants to make sure that the Tools in place already will be used to track the spikes in the system. May close bridge for now until 8am tomorrow morning.
3:48 PM - The system fails over from 10.16.36.85 to 10.16.36.90 and is not load balanced (servers are PRIC101 & PRIC102)
3:52 PM – e3 asks for verification of the connection strings to EID. If a new CPU (server) is added it will necessitate an EID shutdown to implement. Fongu wants the e3 source IP’s (coming from Lars)
3:59 PM – Wes wants confirmation when Lars sends the IPs. ICE director Tom Queen is on site at ICE and is communicating with his group. | | | EID Connectivity / slowness impacting e3 applications | | ICE/EID | Unknown | Unknown | Unknown | EID DBAs, EDME LAN, OneNet, EWS Engineer | Application slowness across e3 core applications | N/A | ICE EID DBA | N/A | e3 applications were available but were experiencing slowness | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
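The 1:08 PM entry above records a request for a full thread dump to find the stack traces of the BLOCKED threads on the affected WebLogic managed server. A minimal sketch of how that capture and count could be scripted on the host is below, assuming the JDK's jcmd tool is on the PATH and the managed-server PID is known; the PID and host details are hypothetical, as they are not recorded in this log.

```python
# Sketch only: capture a full thread dump with the JDK's jcmd tool and count
# threads in the BLOCKED state, as was requested during the 8/23 bridge call.
# The PID is a hypothetical placeholder for the WebLogic managed-server process.
import subprocess
from datetime import datetime

def dump_and_count_blocked(pid: int) -> int:
    dump = subprocess.run(
        ["jcmd", str(pid), "Thread.print"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Save the raw dump so the dev team can review the full stack traces later.
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    with open(f"threaddump_{pid}_{stamp}.txt", "w") as fh:
        fh.write(dump)

    # Each thread's header includes a state line, e.g. "java.lang.Thread.State: BLOCKED".
    return sum(1 for line in dump.splitlines()
               if "java.lang.Thread.State: BLOCKED" in line)

if __name__ == "__main__":
    print("BLOCKED threads:", dump_and_count_blocked(12345))
```

A high BLOCKED count concentrated on servers p012/p014 but not the rest would match the pattern Brian Fox described, and the saved dump gives the stack traces needed to see what lock the threads are waiting on.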
| Unplanned Outage | e3 Processing | 8/23/18 11:45 | 8/23/18 16:00 | 4:15 | N/A | |
Throughout the overnight hours, starting at 22:13 yesterday evening, the TOC observed sporadic, random failures for the E3_EID application. | | OBP;#OFO | | | 8/23/18 11:45 | Technology Operations Center | N/A | Yes | N/A | Thursday 8/23/2018
11:44 AM – EDM group advised that one of the subnets rerouted traffic 35 mins ago. EDME looking into why this happened
11:47 AM - EDME Virtualization asked to join call
11:49 AM - Lars confirmed that the e3 Detentions p012 and p014 servers appear to be diving down (servers maxing out). Lars from SCM group is requesting heap dump of the servers from EWS Engineer Saurin Shah.
11:52 AM - Wes Gould states they are still investigating the rerouting of the subnet, if the rerouting could be the cause of the issues.
11:56 AM - Wes requested a status of the heap dump, EWS Engineer Saurin Shah, advised they are working on the request
11:59 AM - AO portal displays fail time connecting to the EID with a page timeout at 60.11943
12:00 PM - EWS Engineer Saurin Shah confirmed the first Heap dump on the requested server has been complete. EWS is currently working on completing the second Heap dump.
12:06 PM - Wes asks Lars to save the server dumps for review by others.
12:07 PM - EWS Engineer Saurin rebooted e3 detention servers
12:09 PM – EWS completed restart of e3 Detention server
12:09 PM - Windows Services group joins bridge
12:15 PM - Network team asked Lars to confirm the server IP, 10.5.65.50
12:22 PM - EDME is seeing an error on the packet capture – Oracle SQL - Return OPI parameter. E3 Dev team working with EDME on error
12:24 PM - Jose V asking for Andrew to join in with the application team. Oracle SQL error return status to be investigated. ORA01 (Literal does not match format string)
12:26 PM - Wes Gould update: EDME is still running down patch updates.
12:30 PM - Engineers on the bridge call are looking for confirmation from EID that they aren’t doing anything at the moment to impact the network or the database.
12:38 PM – Project Manager Jose Villafane reached out to Vidhya Dandi to request an ICE DBA to join the call, in order to run queries
12:42 PM – Wes asks Andrew to be in a separate group with EDME LAN, maybe moving to a separate bridge number.
12:50 PM – Lars with the SCM group provided heap dump to the e3 dev team for analysis.
E3 dev team analyzing heap dump – analysis was not conclusive. Location: S:\BEMS\e3\Latency (first heap dump for Detentions)
12:53 PM – Wes is getting ICE DBA to run some queries locally to test.
1:00 PM - EDME LAN is taking a look at the Crypto routers, counting on net flow to gather more information. Still awaiting ICE DBA to join the call. Also looking at getting OneNet engineer to join the call.
1:07 PM – Wes updated, No issues found from the subnet rerouting
1:08 PM – BEMSD Program Manager Wes Gould confirms the Oracle errors seen in packet captures are not the source of the issue and are normal errors. E3 Developer Brian Fox confirmed from the heap dump review that the majority of the threads are in the BLOCKED state. Brian Fox is now requesting a full thread dump from the server in question in hopes of finding the stack traces for all the blocked threads.
1:10 PM – e3 Developer Andrew Smalera is requesting e3 developers to look at AppDynamics for the e3 Detentions app for the last day, last 6 hours, and last hour. Two servers (12 and 14) died out but the rest seemed OK. His reasoning: if it were the database, it would probably show across all servers, and likewise if it were code, unless some specific condition is triggering a code issue. Since the behavior is being seen across different applications, it feels more like the network or something specific to those servers.
1:15 PM – Lars with the SCM group is reporting that server uxvnwg001a1076_e3_processing_bemms-p009_ms1 is showing hung session since the previous day
1:17 PM – e3 Developer Brian Fox advised that the e3 Detentions server p014 was cut out of load balancing just after noon on 8/22/2018 because its response time spiked
Presumably this is because all of p014's threads entered a blocked state
1:23 PM – Asking how much data is being pulled from the database and how long database calls are taking (reported on by Kelly Ray). Also asking to put the e3 Home page up at DC1 so testing could be done.
1:32 PM – Rochelle asks Wes for an update. The packet capture is normal, so is the subnet rerouting, still waiting on ICE DBA’s to join. Rochelle stated to give them 15 more minutes then try other back channels.
1:37 PM – Lars is recycling the Processing server 009. Group is wanting the ICE DBA’s on the bridge.
1:40 PM – Tiffany McNeil from ICE joins asking if they needed PROD dba’s to join, no response
1:41 PM – Wes asking for ICE DBA Fongu to join bridge.
1:42 PM - Fongu from EID joined the call. Working with EDME on the problem IP (10.16.36.85). EID servers: 192.168.228.61/99.
2:20 PM- EDME reached out to DC1 Network engineering to locate the device: 10.16.36.85
2:20 PM - ICE DBA Fongu Ngufor sent over CPU Utilization graph from August 16th to the 23 for engineers on the bridge call to review
2:20 PM – ICE Project Manager Vidhya Dandi sent over CPU Utilization for the last 24 hours starting at 3:00 PM August 22, 2018
2:20 PM - E3/EMDE Team has matched the CPU utilization from EID with e3’s slow response times
2:25 PM – Program Manager Wes Gould requesting someone from ICE to confirm if something is running during the times of higher utilization shown on the graph
2:28 PM - ICE DBA Fongu Ngufor sent over Memory Utilization graph from August 16th to the 23 for engineers on the bridge call to review
2:28 PM – Muhammed from CBP NOC is on the line with DC1
2:30 PM – Program Manager Wes sent Mark Scott from EID a list of EID jobs that e3 received from Vidhya
2:30 PM – Wes asks Fongu if he has a graph spanning server trash collection or I/O connectivity & Fongu stated he’ll check.
2:32 PM – Andrew rejoins bridge. Wes asks for list of jobs running on EID database & ICE is checking.
2:33PM – DC1 is going to join bridge
2:37 PM – Fongu send I/O use graphic to group
2:40 PM – DC1 is asked if device 10.16.36.85 is a firewall device, or load balancer, etc. Attempts are being made to traceroute to it or enter device.
2:42 PM - ICE DBA Mark H Scott requesting a list of e3 applications that touch the EID data base
2:45 PM - Project Manager Vidhya Dandi sent over Memory Utilization graph for the last 7 days for review
2:50 PM – DC1 states the device is a server that they do NOT have access to. Question raised: what device is NATting to it? 192.168.228.99 is NATting to 10.16.36.90
2:54 PM - DC1/EID/EDME 10.16.36.85 IP currently mapped 1 to 1 to the EID server: 192.168.228.61
2:54 PM – DC1 engineer Victor looking into what server pool members are mapped to 10.16.36.85 IP
10.16.36.85 IP is a F5 VIP
2:56 PM - Wes asks if ICE EID DBA’s reconciled the jobs list with their list
3:04 PM – Group is asking for IP address confirmation of the device (10.16.36.85) virtual IP. Not sure if it’s a load balancer, still investigating.
3:09 PM – Device is named D1ACLPRIC003A1, Looks like some kind of mainframe device based on descriptions.
3:11 PM – ICE asked if the IP address means anything to them. DC1 reports that they don’t use that naming convention of their devices.
3:15 PM – DC1 asks why we are trying to track down the device. It is an ICE device (AIX mainframe device), confirmed. Why is it in front of the server.
3:15 PM - EDME requested Fongu to provide the CPU/IO graphs for 101
This was provided by Fongu.
3:17 PM – Jose V asks Fongu who manages this device or sets up this device.
3:24 PM – The device is listed as a Database Listener for the ICE DB
3:25 PM – Fongu is able to pull the CPU stats from the 2nd & 3rd of August.
3:29 PM – Matthew is taking over from Muhammed at the CBP NOC
3:33 PM – Discussion is about the CPU usage at ICE (from the CPU usage information sent from the ICE side); when it goes above 80–90%, it corresponds to issues for e3. Asking about adding another server to their cluster. E3 queries are taking too long, which is confirmed by the data sent from Fongu.
3:43 PM – Discussion going up the chain about adding the extra CPU (through director Tom Queen) on ICE side. Jose wants to make sure that the Tools in place already will be used to track the spikes in the system. May close bridge for now until 8am tomorrow morning.
3:48 PM - The system fails over from 10.16.36.85 to 10.16.36.90 and is not load balanced (servers are PRIC101 & PRIC102)
3:52 PM – e3 asks for verification of the connection strings to EID. If a new CPU (server) is added it will necessitate an EID shutdown to implement. Fongu wants the e3 source IP’s (coming from Lars)
3:59 PM – Wes wants confirmation when Lars sends the IPs. ICE director Tom Queen is on site at ICE and is communicating with his group. | | | EID Connectivity / slowness impacting e3 applications | | ICE/EID | Unknown | Unknown | Unknown | EID DBAs, EDME LAN, OneNet, EWS Engineer | Application slowness across e3 core applications | N/A | ICE EID DBA | N/A | e3 applications were available but were experiencing slowness | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 8/16/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 8/16/18 05:00 | 8/16/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Detentions | 8/14/18 03:00 | 8/15/18 18:30 | 15:30 | | | e3 support reached out to ICE DBAs due to some EID FAILS noticed in the Auto Ops Portal. E3 support received a call from the TOC around 3:00 AM on 8/14/18 about the page timeouts. During this time e3 Support checked AppDynamics and confirmed all applications were accessible with no issues except for Detentions (slowness logging in). Monitoring of the AO Portal showed intermittent EID connectivity issues, but none worthy of starting a bridge call. E3 support looped in the SCM BEIB team, and our review of AppDynamics showed some connection slowness for e3 Processing and e3 Detentions, but the applications were working. Overall EID connectivity with other applications appeared within the norms with no spikes. E3 Support reviewed connectivity for an additional 60 minutes and noticed connectivity continued to be intermittent. | | OBP;#OFO;#OFO/SIGMA | | | 8/14/18 03:00 | TOC | 11361219 | Yes | |
Tue 8/14/2018 10:43 AM Melese CBP NOC
Tue 8/14/2018 10:47 AM Eddie Wally, ICE Oracle DBA team lead, went through logs and sessions to see if there were table blocks. Every SQL query running has completed.
Tue 8/14/2018 10:55 AM LARS SCM team joined the call
Tue 8/14/2018 10:58 AM Stephanie Teirno joined the call
Tue 8/14/2018 11:00 AM Vidhya, Dandi ICE joined the call
Tue 8/14/2018 11:03 AM CBP NOC asked if EDME could join the bridge call
Tue 8/14/2018 11:03 AM Francis K. Ocran Unix joins the call
Tue 8/14/2018 11:04 AM Duty Officer requested EDME LAN to join the call
Tue 8/14/2018 11:08 AM Willie Williams EDME Lan joined the call
Tue 8/14/2018 11:08 AM Antione UNIX support joined the call
Tue 8/14/2018 11:08 AM Willie Williams Joins (EDME Lan)
Tue 8/14/2018 11:11 AM ICE DBA’s haven’t discovered anything on their side. Discussion of previous ICE backup issues but they have no bearing on this issue
Tue 8/14/2018 11:16 AM Vidhya from ICE pulled from previous notes that they weren’t able to capture any packets that were helpful to remedying this issue.
Tue 8/14/2018 11:24 AM Adewale (Wally) spoke with the LAN group; they want to know which specific modules are being affected (Detentions, Processing). They can run a TCP dump to track the issue from source to destination, and need the source and destination IPs. The UNIX group states the problem is between the e3 servers and the EID database, i.e. the network, not the database
Tue 8/14/2018 11:30 AM Brandon from e3 Support suggested running packet capture longer than 2 minutes but SCM group says running it longer will negatively affect everything and cause more issues. Last AO Portal fail was 11:15a.
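For reference, a minimal sketch of the kind of short, targeted capture discussed above, assuming a Linux host where tcpdump is available; the source and destination addresses are placeholders, since the actual e3 and EID host IPs are not recorded in this log.

import subprocess

# Placeholder addresses; the real source (e3 server) and destination (EID database)
# IPs were shared with the LAN group on the call.
SRC_IP = "10.0.0.10"
DST_IP = "10.0.1.20"

# Capture about two minutes of traffic between the two hosts and write it to a pcap
# file that can be handed to the network teams, keeping the run short to limit impact.
subprocess.run(
    ["timeout", "120",
     "tcpdump", "-i", "any", "-w", "e3_eid_capture.pcap",
     f"host {SRC_IP} and host {DST_IP}"],
    check=False,
)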
Tue 8/14/2018 11:34 AM SCM team checking what jobs are running/ Unable to confirm at this point and time
Tue 8/14/2018 11:35 AM Lars recounted the EID failures from this morning, intermittent from 12:43a. APPD down or rather being updated, cannot gather any information at the moment.
Tue 8/14/2018 11:40 AM Brandon just noticed another fail at the portal
Tue 8/14/2018 11:44 AM Lars steps away temporarily.
Tue 8/14/2018 11:46 AM Brandon inquired whether the TCP dumps had started capturing. Vidhya is confirming. Francis with UNIX is checking through the dumps now
Tue 8/14/2018 11:54 AM Vidhya inquires if there has been a spike on the portal. Lars responds yes but not a huge spike. Last big spike was at 11:45am.
Tue 8/14/2018 11:57 AM Francis Wanted the names of the most affected VM’s. Hostnames given.
Tue 8/14/2018 11:59 AM Vidhya sending TCP dumps to LARS.
Tue 8/14/2018 12:00 PM Vidhya may want ONEnet on the call to parse traffic going through DC1.
Tue 8/14/2018 12:03 PM Devin reached out to the duty officers to have them engage OneNet to join bridge.
Tue 8/14/2018 12:09 PM Lars received dumps & forwarded them on.
Tue 8/14/2018 12:10 PM Need Help desk ticket to engage OneNet. Vidhya wants OneNet to look at traffic from 3am this morning.
Tue 8/14/2018 12:19 PM Lars want to setup a later time for EDME to grab a TCP dump later today (6 to 7pm) for a different baseline.
Tue 8/14/2018 12:21 PM The NOC ran a packet capture from 11am to now showing the retransmission fail
Tue 8/14/2018 12:26 PM OneNet is about to join bridge.
Tue 8/14/2018 12:29 PM Bibhu states Tier2 from OneNet is Going to join
Tue 8/14/2018 12:32 PM Carl from Tier2 OneNet joined, reviewing TCP dumps at the moment. Between e3 processing server to EID at DC1. A lot of retransmissions.
Tue 8/14/2018 12:36 PM Carl is checking the OneNet firewalls now.
Tue 8/14/2018 12:47 PM Vidhya is leaving bridge & wants update from Lars later. Carl didn’t find any issues in the packet dumps
Tue 8/14/2018 12:48 PM Lars noted another spike with Processing.
Tue 8/14/2018 12:50 PM No errors with the client in connecting to DC1
Tue 8/14/2018 12:58 PM Bibhu is asking how much CPU utilization is currently occurring
Tue 8/14/2018 1:00 PM Explanation of why the client keeps retransmitting. (not hitting
Tue 8/14/2018 1:08 PM Carl is retracing the route the traffic is going on, over the last three weeks.
Tue 8/14/2018 1:18 PM Bibhu asked if there was an update, Carl responded no. Carl also could not see the retransmissions.
Tue 8/14/2018 1:27 PM Carl is still checking on the firewalls. Bibhu wants to escalate to Tier3. System spiked at 693000 ms, system will timeout at 300000 ms
Tue 8/14/2018 1:31 PM Lars stated the current response time is 177000 ms against an average of 4000 ms, but the system is operating properly
Tue 8/14/2018 1:36 PM Carl asked how many users this is impacting and SCM responded more than 500 users. Escalated to Tier 3 now.
Tue 8/14/2018 2:01 PM Carl from OneNet advised someone from tier 3 will be joining the call shortly
Tue 8/14/2018 2:19 PM Ariya from OneNet joined
Tue 8/14/2018 2:26 PM LARS from the SCM team is running pings from
Tue 8/14/2018 3:03 PM Ariya from OneNet continues to review packets which in his opinion are too large
Tue 8/14/2018 3:09 PM SCM group is wondering why the packets are so large and why the problem is so intermittent. No packet loss when pinging the EID servers. There should not be packets over 1400 bytes, but the group is seeing packet sizes of 1500 and 1600. Focus now is on what other devices at DC1 could be generating packets this large.
Tue 8/14/2018 3:15 PM Ruled out backup traffic as a culprit. Routers should not be sending packets that large; the question is what is causing them to be resized.
Tue 8/14/2018 3:20 PM Group wants EDME to send ping packets of a specific size from servers to DC1 & capture them.
Tue 8/14/2018 3:30 PM The packets aren’t making it through to DC1 at 1400 MTU but are at 1500 MTU.
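For reference, a minimal sketch of the fixed-size, don't-fragment ping test described above, assuming a Linux ping that supports -M do and -s; the DC1 target address is a placeholder.

import subprocess

DC1_HOST = "10.0.1.20"  # placeholder for the EID/DC1 target used on the call

# ICMP payload sizes chosen so the packets on the wire are roughly 1400 and 1500 bytes
# once the 28 bytes of IP/ICMP headers are added.
for payload in (1372, 1472):
    result = subprocess.run(
        ["ping", "-c", "5", "-M", "do", "-s", str(payload), DC1_HOST],
        capture_output=True, text=True,
    )
    print(f"payload {payload} bytes -> return code {result.returncode}")
    print(result.stdout)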
Tue 8/14/2018 3:30 PM Normal traffic has been seen for the last 30 minutes
Tue 8/14/2018 3:48 PM Discussion of recently sent packets. The latest packet (3:13 PM) came back at 1632 bytes. Are the packets being optimized by the Riverbed appliance? (Yes.)
Tue 8/14/2018 3:57 PM Discussion of why the routers are sending larger or smaller packet sizes (to keep data from becoming fragmented)
Tue 8/14/2018 3:59 PM Bibhu checked the Riverbed logs and found no new information.
Tue 8/14/2018 4:05 PM Pings are continuing to run properly without issue. All e3 servers are running green with good health-checks. Little slowness noted.
Tue 8/14/2018 4:10 PM Discussion of why packet size isn’t consistent leaving the server side
Tue 8/14/2018 4:14 PM Discussion of tabling for the rest of today & reconvening tomorrow when the issue reappears. Will attempt to bring back on all the same participants.
Tue 8/14/2018 4:19 PM Carl from Tier2 has commitments that may keep him from rejoining tomorrow. The group is going to investigate the Riverbed MTU settings.
Tue 8/14/2018 4:26 PM Discussion of setting the Riverbed/Steelhead appliances to the "Pass Through" setting so the traffic is NOT optimized, in order to get new baseline information
Tue 8/14/2018 4:29 PM Latest numbers, 17000ms (avg is 4000). Getting Gov. PM lead to sign off setting riverbeds to “Pass Through” setting.
Tue 8/14/2018 4:33 PM Jose Villafane gives his approval for the Pass Through option. Carl at Tier2 (OneNet) is setting up the paperwork. Discussion of what exactly "Pass Through" means for the e3 traffic for just one (1) server.
Tue 8/14/2018 4:40 PM Discussion of how long to leave system in pass through state. No set time frame was mentioned.
Tue 8/14/2018 4:52 PM Riverbed “Pass Through” reconfigure has been completed & now configurations are being made on the e3 processing server side.
Tue 8/14/2018 4:56 PM e3 Processing server being restarted
Tue 8/14/2018 5:01 PM Verification that the server traffic is NOT being optimized through the Riverbed device. Verification by SCM group that there is no noticeable change being seen.
Tue 8/14/2018 5:04 PM obtaining new packet capture readings.
Tue 8/14/2018 5:10 PM Lars from the SCM team confirmed the Riverbed Pass Through is not making much of a difference
Wed 8/15/2018 Bridge call started at 10:00 AM
Wed 8/15/2018 10:03 AM Pawan DHS OneNet Tier2 joined the call
Wed 8/15/2018 10:03 AM SCM Lars joined the call
Wed 8/15/2018 10:10 AM Melese CBP NOC joined the call
Wed 8/15/2018 10:10 AM Carl DHS OneNet Tier2 joined the call
Wed 8/15/2018 10:11 AM Ariya DHS OneNet Tier 3 joined the call
Wed 8/15/2018 10:14 AM Pawan DHS OneNet Tier2 dropped off the call
Wed 8/15/2018 10:28 AM Melese from CBP NOC performed a packet capture and didn't see anything. Yesterday engineers were seeing big packets; today they were not
Wed 8/15/2018 10:49 AM Bridge call ended with the understanding that e3 Support will monitor the applications for the remainder of the day and reconvene Thursday morning at 11:00 AM to decide whether to keep the Riverbed in Pass Through. Engineers on the call did not see any issues with e3 applications connecting to EID. CBP NOC performed a packet capture, which confirmed there were no large packets being received compared to the large packets received the previous day. Engineers on the bridge call also compared the TCP dumps gathered from the EID DBAs from 10:00 AM and 6:00 PM on Tuesday. They noticed the packets around 6:00 PM were small; as a result, engineers agreed to leave the Riverbed in a Pass Through state until Thursday. If the issue arises again, engineers plan to see if the server responds differently following the Riverbed bypass.
Wed 8/15/2018 4:45 PM Bridge call convened
Wed 8/15/2018 5:08 PM ICE DBA joins; Lars on a separate call with the NOC
Wed 8/15/2018 5:20 PM Jose Villafane made reference to a similar incident from August 2016 and asked Brandon from e3 to look into it
Wed 8/15/2018 5:42 PM Oracle DBA confirms that they are not running any backups or anything else in prod
Wed 8/15/2018 5:44 PM Kelly Ray joins; Wes Gould wants to know if data can be gathered from 2 years back for confirmation on this issue.
Wed 8/15/2018 5:48 PM Wes asked the duty officers for confirmation whether any other groups are complaining about latency issues, not just e3. Response: no.
Wed 8/15/2018 5:51 PM Lars reiterates the information from yesterday about the re-transmissions. Brandon finds SharePoint information about a transmission issue from August 22, 2016.
Wed 8/15/2018 5:54 PM Wes asks if we can stop the replication traffic to see if this corrects the issue. Jose stated that he and Mike French found a specific tool that identified the packet problem back then.
Wed 8/15/2018 6:02 PM AO Portal has shown no failures since 4:30 PM. Riverbed discussion again.
Wed 8/15/2018 6:04 PM Michael Amadi from the network team joins. States this isn't the first time we are seeing this behavior with packets through WebLogic
Wed 8/15/2018 6:08 PM Discussion of Oracle patching as a possible cause
Wed 8/15/2018 6:15 PM Kelly Ray is sending the data behind the graph to e3 management
Wed 8/15/2018 6:22 PM Lars asks when ICE moved to Oracle 12.2 and whether we are missing a patch
Wed 8/15/2018 6:30 PM Bridge call ended with all parties agreeing to rejoin if the issue resurfaces | | | E3 Service Degradation/Latency - EID Slowness | Unknown | E3 | N/A | N/A | N/A | UNIX, EDME LAN, CBP NOC, ICE DBA, SCM group | e3 Service Degradation/Latency - EID Slowness impacting e3 Processing and Detentions | N/A | N/A | N/A | e3 Service Degradation/Latency - EID Slowness impacting e3 Processing and Detentions | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Processing | 8/14/18 03:00 | 8/15/18 18:30 | 15:30 | | | e3 Support reached out to the ICE DBAs due to EID failures noticed in the Auto Ops Portal. e3 Support received a call from the TOC around 3:00 AM on 8/14/18 about page timeouts. e3 Support checked AppDynamics and confirmed all applications were accessible with no issues except for Detentions (slowness logging in). Monitoring of the AO Portal showed intermittent EID connectivity issues, but none severe enough to start a bridge call. e3 Support looped in the SCM BEIB team; the review of AppDynamics showed some connection slowness for e3 Processing and e3 Detentions, but the applications were working. Overall EID connectivity with other applications appeared within norms with no spikes. e3 Support reviewed connectivity for an additional 60 minutes and noticed that connectivity continued to be intermittent. | | OBP;#OFO;#OFO/SIGMA | | | 8/14/18 03:00 | TOC | 11361219 | Yes | |
Tue 8/14/2018 10:43 AM Melese from CBP NOC joined the call
Tue 8/14/2018 10:47 AM Eddie and Wally from the ICE Oracle DBA team went through logs and sessions to see if there were table blocks. Every SQL query that was running has completed.
Tue 8/14/2018 10:55 AM LARS SCM team joined the call
Tue 8/14/2018 10:58 AM Stephanie Teirno joined the call
Tue 8/14/2018 11:00 AM Vidhya, Dandi ICE joined the call
Tue 8/14/2018 11:03 AM CBP NOC asked if EDME could join the bridge call
Tue 8/14/2018 11:03 AM Francis K. Ocran Unix joins the call
Tue 8/14/2018 11:04 AM Duty officer requested EDME LAN to join the call
Tue 8/14/2018 11:08 AM Willie Williams EDME Lan joined the call
Tue 8/14/2018 11:08 AM Antione UNIX support joined the call
Tue 8/14/2018 11:11 AM ICE DBA’s haven’t discovered anything on their side. Discussion of previous ICE backup issues but they have no bearing on this issue
Tue 8/14/2018 11:16 AM Vidhya from ICE pulled from previous notes that they weren’t able to capture any packets that were helpful to remedying this issue.
Tue 8/14/2018 11:24 AM Adewale (Wally) spoke with the LAN group; they want to know which specific modules are affected (Detentions, Processing). They can run a TCP dump to track the issue from source to destination and need the source and destination IPs. The UNIX group states the problem is between the e3 servers and the EID database (network), not with the database itself.
Tue 8/14/2018 11:30 AM Brandon from e3 Support suggested running packet capture longer than 2 minutes but SCM group says running it longer will negatively affect everything and cause more issues. Last AO Portal fail was 11:15a.
Tue 8/14/2018 11:34 AM SCM team checking what jobs are running/ Unable to confirm at this point and time
Tue 8/14/2018 11:35 AM Lars recounted the EID failures from this morning, intermittent from 12:43a. APPD down or rather being updated, cannot gather any information at the moment.
Tue 8/14/2018 11:40 AM Brandon just noticed another fail at the portal
Tue 8/14/2018 11:44 AM Lars steps away temporarily.
Tue 8/14/2018 11:46 AM Brandon inquired whether the TCP dumps had started capturing. Vidhya is confirming. Francis with UNIX is checking through the dumps now
Tue 8/14/2018 11:54 AM Vidhya inquires if there has been a spike on the portal. Lars responds yes but not a huge spike. Last big spike was at 11:45am.
Tue 8/14/2018 11:57 AM Francis Wanted the names of the most affected VM’s. Hostnames given.
Tue 8/14/2018 11:59 AM Vidhya sending TCP dumps to LARS.
Tue 8/14/2018 12:00 PM Vidhya may want ONEnet on the call to parse traffic going through DC1.
Tue 8/14/2018 12:03 PM Devin reached out to the duty officers to have them engage OneNet to join bridge.
Tue 8/14/2018 12:09 PM Lars received dumps & forwarded them on.
Tue 8/14/2018 12:10 PM Need Help desk ticket to engage OneNet. Vidhya wants OneNet to look at traffic from 3am this morning.
Tue 8/14/2018 12:19 PM Lars want to setup a later time for EDME to grab a TCP dump later today (6 to 7pm) for a different baseline.
Tue 8/14/2018 12:21 PM The NOC ran a packet capture from 11am to now showing the retransmission fail
Tue 8/14/2018 12:26 PM OneNet is about to join bridge.
Tue 8/14/2018 12:29 PM Bibhu states Tier2 from OneNet is Going to join
Tue 8/14/2018 12:32 PM Carl from Tier2 OneNet joined, reviewing TCP dumps at the moment. Between e3 processing server to EID at DC1. A lot of retransmissions.
Tue 8/14/2018 12:36 PM Carl is checking the OneNet firewalls now.
Tue 8/14/2018 12:47 PM Vidhya is leaving bridge & wants update from Lars later. Carl didn’t find any issues in the packet dumps
Tue 8/14/2018 12:48 PM Lars noted another spike with Processing.
Tue 8/14/2018 12:50 PM No errors with the client in connecting to DC1
Tue 8/14/2018 12:58 PM Bibhu is asking how much CPU utilization is currently occurring
Tue 8/14/2018 1:00 PM Explanation of why the client keeps retransmitting. (not hitting
Tue 8/14/2018 1:08 PM Carl is retracing the route the traffic is going on, over the last three weeks.
Tue 8/14/2018 1:18 PM Bibhu asked if there was an update, Carl responded no. Carl also could not see the retransmissions.
Tue 8/14/2018 1:27 PM Carl is still checking on the firewalls. Bibhu wants to escalate to Tier3. System spiked at 693000 ms, system will timeout at 300000 ms
Tue 8/14/2018 1:31 PM Lars stated the current response time is 177000 ms against an average of 4000 ms, but the system is operating properly
Tue 8/14/2018 1:36 PM Carl asked how many users this is impacting and SCM responded more than 500 users. Escalated to Tier 3 now.
Tue 8/14/2018 2:01 PM Carl from OneNet advised someone from tier 3 will be joining the call shortly
Tue 8/14/2018 2:19 PM Ariya from OneNet joined
Tue 8/14/2018 2:26 PM LARS from the SCM team is running pings from
Tue 8/14/2018 3:03 PM Ariya from OneNet continues to review packets which in his opinion are too large
Tue 8/14/2018 3:09 PM SCM group is wondering why the packets are so large and why the problem is so intermittent. No packet loss when pinging the EID servers. There should not be packets over 1400 bytes, but the group is seeing packet sizes of 1500 and 1600. Focus now is on what other devices at DC1 could be generating packets this large.
Tue 8/14/2018 3:15 PM Ruled out backup traffic as a culprit. Routers should not be sending packets that large; the question is what is causing them to be resized.
Tue 8/14/2018 3:20 PM Group wants EDME to send ping packets of a specific size from servers to DC1 & capture them.
Tue 8/14/2018 3:30 PM The packets aren’t making it through to DC1 at 1400 MTU but are at 1500 MTU.
Tue 8/14/2018 3:30 PM Normal traffic has been seen for the last 30 minutes
Tue 8/14/2018 3:48 PM Discussion of recently sent packets. The latest packet (3:13 PM) came back at 1632 bytes. Are the packets being optimized by the Riverbed appliance? (Yes.)
Tue 8/14/2018 3:57 PM Discussion of why the routers are sending larger or smaller packet sizes (to keep data from becoming fragmented)
Tue 8/14/2018 3:59 PM Bibhu checked the Riverbed logs and found no new information.
Tue 8/14/2018 4:05 PM Pings are continuing to run properly without issue. All e3 servers are running green with good health-checks. Little slowness noted.
Tue 8/14/2018 4:10 PM Discussion of why packet size isn’t consistent leaving the server side
Tue 8/14/2018 4:14 PM Discussion of tabling for the rest of today & reconvening tomorrow when the issue reappears. Will attempt to bring back on all the same participants.
Tue 8/14/2018 4:19 PM Carl from Tier2 has commitments that may keep him from rejoining tomorrow. The group is going to investigate the Riverbed MTU settings.
Tue 8/14/2018 4:26 PM Discussion of setting the Riverbed/Steelhead appliances to the "Pass Through" setting so the traffic is NOT optimized, in order to get new baseline information
Tue 8/14/2018 4:29 PM Latest numbers, 17000ms (avg is 4000). Getting Gov. PM lead to sign off setting riverbeds to “Pass Through” setting.
Tue 8/14/2018 4:33 PM Jose Villafane gives his approval for the Pass Through option. Carl at Tier2 (OneNet) is setting up the paperwork. Discussion of what exactly "Pass Through" means for the e3 traffic for just one (1) server.
Tue 8/14/2018 4:40 PM Discussion of how long to leave system in pass through state. No set time frame was mentioned.
Tue 8/14/2018 4:52 PM Riverbed “Pass Through” reconfigure has been completed & now configurations are being made on the e3 processing server side.
Tue 8/14/2018 4:56 PM e3 Processing server being restarted
Tue 8/14/2018 5:01 PM Verification that the server traffic is NOT being optimized through the Riverbed device. Verification by SCM group that there is no noticeable change being seen.
Tue 8/14/2018 5:04 PM obtaining new packet capture readings.
Tue 8/14/2018 5:10 PM Lars from the SCM team confirmed the Riverbed Pass Through is not making much of a difference
Wed 8/15/2018 Bridge call started at 10:00 AM
Wed 8/15/2018 10:03 AM Pawan DHS OneNet Tier2 joined the call
Wed 8/15/2018 10:03 AM SCM Lars joined the call
Wed 8/15/2018 10:10 AM Melese CBP NOC joined the call
Wed 8/15/2018 10:10 AM Carl DHS OneNet Tier2 joined the call
Wed 8/15/2018 10:11 AM Ariya DHS OneNet Tier 3 joined the call
Wed 8/15/2018 10:14 AM Pawan DHS OneNet Tier2 dropped off the call
Wed 8/15/2018 10:28 AM Melese from CBP NOC performed a packet capture and didn't see anything. Yesterday engineers were seeing big packets; today they were not
Wed 8/15/2018 10:49 AM Bridge call ended with the understanding that e3 Support will monitor the applications for the remainder of the day and reconvene Thursday morning at 11:00 AM to decide whether to keep the Riverbed in Pass Through. Engineers on the call did not see any issues with e3 applications connecting to EID. CBP NOC performed a packet capture, which confirmed there were no large packets being received compared to the large packets received the previous day. Engineers on the bridge call also compared the TCP dumps gathered from the EID DBAs from 10:00 AM and 6:00 PM on Tuesday. They noticed the packets around 6:00 PM were small; as a result, engineers agreed to leave the Riverbed in a Pass Through state until Thursday. If the issue arises again, engineers plan to see if the server responds differently following the Riverbed bypass.
Wed 8/15/2018 4:45 PM Bridge call convened
Wed 8/15/2018 5:08 PM ICE DBA joins; Lars on a separate call with the NOC
Wed 8/15/2018 5:20 PM Jose Villafane made reference to a similar incident from August 2016 and asked Brandon from e3 to look into it
Wed 8/15/2018 5:42 PM Oracle DBA confirms that they are not running any backups or anything else in prod
Wed 8/15/2018 5:44 PM Kelly Ray joins; Wes Gould wants to know if data can be gathered from 2 years back for confirmation on this issue.
Wed 8/15/2018 5:48 PM Wes asked the duty officers for confirmation whether any other groups are complaining about latency issues, not just e3. Response: no.
Wed 8/15/2018 5:51 PM Lars reiterates the information from yesterday about the re-transmissions. Brandon finds SharePoint information about a transmission issue from August 22, 2016.
Wed 8/15/2018 5:54 PM Wes asks if we can stop the replication traffic to see if this corrects the issue. Jose stated that he and Mike French found a specific tool that identified the packet problem back then.
Wed 8/15/2018 6:02 PM AO Portal has shown no failures since 4:30 PM. Riverbed discussion again.
Wed 8/15/2018 6:04 PM Michael Amadi from the network team joins. States this isn't the first time we are seeing this behavior with packets through WebLogic
Wed 8/15/2018 6:08 PM Discussion of Oracle patching as a possible cause
Wed 8/15/2018 6:15 PM Kelly Ray is sending the data behind the graph to e3 management
Wed 8/15/2018 6:22 PM Lars asks when ICE moved to Oracle 12.2 and whether we are missing a patch
Wed 8/15/2018 6:30 PM Bridge call ended with all parties agreeing to rejoin if the issue resurfaces | | | E3 Service Degradation/Latency - EID Slowness | Unknown | E3 | N/A | N/A | N/A | UNIX, EDME LAN, CBP NOC, ICE DBA, SCM group | e3 Service Degradation/Latency - EID Slowness impacting e3 Processing and Detentions | N/A | N/A | N/A | e3 Service Degradation/Latency - EID Slowness impacting e3 Processing and Detentions | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Detentions | 7/19/18 05:30 | 7/19/18 11:10 | 5:40 | | | Following the scheduled EID maintenance, the e3 Processing and e3 Detentions applications became unavailable. At this time the e3 Detentions module and e3 Processing are in a degraded state where some users are unable to launch the e3 Detentions module. It is unclear at this time if the issue is related to the EID maintenance or the e3 maintenance this morning, but we are focusing our attention on stabilizing the e3 Detentions application. | | OBP | | | 7/19/18 05:30 | TOC | NA | Yes | | 5:55 am - TOC sent an email about AppDynamics showing errors on Detentions
5:57 am - e3 Support received a call from the TOC about the e3 Detentions errors in AppDynamics
6:05 am - TOC sent a 2nd email about AppDynamics showing errors on Processing
6:15 am - Tigist from e3 Support reached out to e3 configuration manager Lars to confirm if the issue is related to the maintenance from this morning; he stated they are aware of the issue and are investigating
6:16 am – Tigist from e3 Support replied to the email; e3 Support is investigating
6:17 am – Brandon from e3 Support replied to the email stating this is related to the e3 maintenance. We are currently recycling our servers.
6:23 am - CBP Duty Officer Joshua sent an email asking if Brandon can call EID and see if they are having post-implementation issues
6:24 am - Brandon replied he is in the process of reaching out to ICE
6:45 am - Jose from e3 replied to the email that he is reaching out to the ICE EID team to get us a DBA
6:54 am - Jose from e3 sent the bridge call number to the Duty Officer and TOC to join the bridge call
7:09 am – Ikram from e3 Support created a ticket and updated the homepage
7:10 am - Tigist from e3 Support joined the bridge call
7:48 am - EID engineer confirmed the script is running and the blocking sessions are down as well
7:49 am – Wes from e3 joined the bridge call
7:54 am – Lars from e3 confirmed the e3 Detentions and Processing servers are up and running
7:55 am – Medina from ICE/EID stated server 29 has e3 blocking sessions that are going down, and 67 blocking sessions are inactive
8:01 am - Lars confirmed he killed all the blocking sessions; not sure why they are coming back
8:02 am – Jose suggested restarting the server so that we clear out the sessions
8:03 am – EID engineer started killing the script sessions on the database and restarting the server
8:12 am - Nielab from e3 joined the bridge call
8:14 am - Jimmy from ICE/EID joined the bridge call
8:18 am – Jose from e3 asked Lars how the Processing app is looking; Lars responds it's up and running
8:22 am – Jackie from ICE/EID confirmed that the script session has been killed and blocking sessions are starting again
8:27 am – Wes from e3 asked what the impact to users is and whether the applications are down. Lars stated the applications are up and running but showing slow in AppDynamics: Detentions 75% normal, 7% slow, 2% error; Processing 70% normal, 7% very slow, and 2% error
8:33 am – Wes asked how things look on the tables. Andrew from e3 confirmed AppDynamics shows 2600 milliseconds for the last
8:35 am - Jackie from ICE/EID confirmed she is seeing 97 blocking sessions on the table and she is about to kill these blocking sessions
8:39 am - Stephani from ICE/EID recommended shutting down the application and starting it back up, which may help the AppDynamics performance; Lars confirmed he will put the site down page up first
8:44 am - Wes approved that they can put the site down page up to restart the app server
8:50 am - Jackie from ICE/EID confirmed that she is seeing 22 e3 booking blocking sessions; Andrew and Wes asked her if she can kill these blocking sessions
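For reference, a minimal sketch of how blocking sessions like those counted above can be listed from Oracle's v$session view, assuming Python with the cx_Oracle driver; the connection details are placeholders, and the actual identification and kill decisions were made by the ICE/EID DBAs.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder credentials/DSN; the real EID connection details are not in this log.
conn = cx_Oracle.connect("monitor_user", "password", "eid-db-host/EIDSVC")
cur = conn.cursor()

# List sessions that are currently blocked and the session blocking them.
cur.execute("""
    SELECT sid, serial#, username, blocking_session, seconds_in_wait
    FROM   v$session
    WHERE  blocking_session IS NOT NULL
""")
for sid, serial, username, blocker, wait_s in cur:
    print(f"session {sid},{serial} ({username}) blocked by {blocker} for {wait_s}s")
    # A DBA could then terminate a problem session with:
    #   ALTER SYSTEM KILL SESSION '<sid>,<serial#>';

conn.close()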
8:56 am – Andrew and Lars confirmed Processing is blocked and the app is off
9:00 am - Jackie from ICE/EID confirmed the database is looking good and we have 48 inactive e3 sessions
9:02 am – Wes asked Brandon to verify e3 Processing; Brandon confirmed Processing is launching
9:10 am – Wes from e3 asked the ICE/EID engineer what has been added to the Detentions module database. ICE/EID confirmed a related table was added for the detention family unit
9:15 am - Jackie from ICE/EID confirmed seeing 25 e3 Detentions blocking sessions
9:29 am – Wes asked if the site down page is up for Detentions; Lars confirmed the site down page is up
9:43 am - Andrew, e3 developer, sees that the issue appears to be that the subject unique index is lost, and if the ICE/EID DBAs can relax the constraint that might fix the issue; Medina from ICE/EID stated that the unique index was just added yesterday but they will look into it.
9:50 am – Wes from e3 stated we have been down for 5 hours and asked what the status is on fixing this issue and how long it will take. Stephanie from ICE/EID said to give them a few minutes
10:07 am – Chris from ICE/EID stated they have three plans to work through: if plan 1 does not work they will do plan 2, and if that doesn't work they will try plan 3. Plan 1 is collecting statistics on the subject group, plan 2 is grabbing the statistics plan, and plan 3 is adding the unique index back
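For reference, plan 3 amounts to re-creating the dropped unique index. A minimal sketch of that DDL, assuming cx_Oracle and placeholder table and column names, since the actual EID schema objects are not named in this log.

import cx_Oracle

# Placeholder connection and object names; the real subject group tables and key
# columns belong to the ICE/EID schema and are not recorded here.
conn = cx_Oracle.connect("eid_dba", "password", "eid-db-host/EIDSVC")
cur = conn.cursor()

# Re-create the unique index that the e3 Detentions module depends on.
cur.execute("""
    CREATE UNIQUE INDEX subject_group_uk
    ON subject_group (subject_group_id)
""")

conn.close()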
10:27 am – Medina stated that performing these three plans will take them some time, and Wes confirmed to proceed as long as that fixes the issue
10:45 am – Jackie confirmed that she is creating a list now
10:50 am – Medina stated that since Jackie is checking the Detentions log sessions, e3 will check the apps; Lars confirmed he is checking the Detentions Apache log
10:54 am – Wes confirmed e3 Detentions is available, but we will keep monitoring for the next hour until it is fully stable
10:55 am – the bridge call is still ongoing
10:56 am – the e3 homepage is updated
11:05 am – Wes asked if they can hold any upcoming implementations and discuss them with e3; the ICE/EID team confirmed they will.
11:08 am – the ICE/EID team and e3 confirmed all is clear
11:10 am - The bridge call ended | | | e3 Processing and e3 Detentions Degraded State | ICE | ICE/EID | ICE DBAs implemented the unique index on the subject group and subject group member tables. This was a partial rollback of their changes from last night. We confirmed on the backend that performance is back to pre-EID-release levels. The site down page has been removed and e3DM has been turned back on. e3 Support has verified and was able to launch e3DM successfully. While monitoring for blocked sessions, at 11:10 AM ICE confirmed that they were no longer seeing blocked sessions and the database looked clear. Bridge call ended. | Further investigation revealed that changes ICE made last night/this morning removed an index on one of the tables used by e3DM. | ICE/e3 | ICE DBA | e3 Processing and e3 Detentions are experiencing an outage. Users are unable to process subjects. | | ICE DBA | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Processing | 7/19/18 05:30 | 7/19/18 11:10 | 5:40 | | | Following the scheduled EID maintenance, the e3 Processing and e3 Detentions applications became unavailable. At this time the e3 Detentions module and e3 Processing are in a degraded state where some users are unable to launch the e3 Detentions module. It is unclear at this time if the issue is related to the EID maintenance or the e3 maintenance this morning, but we are focusing our attention on stabilizing the e3 Detentions application. | | OBP | | | 7/19/18 05:30 | TOC | NA | Yes | | 5:55 am - TOC sent an email about AppDynamics showing errors on Detentions
5:57 am - e3 Support received a call from the TOC about the e3 Detentions errors in AppDynamics
6:05 am - TOC sent a 2nd email about AppDynamics showing errors on Processing
6:15 am - Tigist from e3 Support reached out to e3 configuration manager Lars to confirm if the issue is related to the maintenance from this morning; he stated they are aware of the issue and are investigating
6:16 am – Tigist from e3 Support replied to the email; e3 Support is investigating
6:17 am – Brandon from e3 Support replied to the email stating this is related to the e3 maintenance. We are currently recycling our servers.
6:23 am - CBP Duty Officer Joshua sent an email asking if Brandon can call EID and see if they are having post-implementation issues
6:24 am - Brandon replied he is in the process of reaching out to ICE
6:45 am - Jose from e3 replied to the email that he is reaching out to the ICE EID team to get us a DBA
6:54 am - Jose from e3 sent the bridge call number to the Duty Officer and TOC to join the bridge call
7:09 am – Ikram from e3 Support created a ticket and updated the homepage
7:10 am - Tigist from e3 Support joined the bridge call
7:48 am - EID engineer confirmed the script is running and the blocking sessions are down as well
7:49 am – Wes from e3 joined the bridge call
7:54 am – Lars from e3 confirmed the e3 Detentions and Processing servers are up and running
7:55 am – Medina from ICE/EID stated server 29 has e3 blocking sessions that are going down, and 67 blocking sessions are inactive
8:01 am - Lars confirmed he killed all the blocking sessions; not sure why they are coming back
8:02 am – Jose suggested restarting the server so that we clear out the sessions
8:03 am – EID engineer started killing the script sessions on the database and restarting the server
8:12 am - Nielab from e3 joined the bridge call
8:14 am - Jimmy from ICE/EID joined the bridge call
8:18 am – Jose from e3 asked Lars how the Processing app is looking; Lars responds it's up and running
8:22 am – Jackie from ICE/EID confirmed that the script session has been killed and blocking sessions are starting again
8:27 am – Wes from e3 asked what the impact to users is and whether the applications are down. Lars stated the applications are up and running but showing slow in AppDynamics: Detentions 75% normal, 7% slow, 2% error; Processing 70% normal, 7% very slow, and 2% error
8:33 am – Wes asked how things look on the tables. Andrew from e3 confirmed AppDynamics shows 2600 milliseconds for the last
8:35 am - Jackie from ICE/EID confirmed she is seeing 97 blocking sessions on the table and she is about to kill these blocking sessions
8:39 am - Stephani from ICE/EID recommended shutting down the application and starting it back up, which may help the AppDynamics performance; Lars confirmed he will put the site down page up first
8:44 am - Wes approved that they can put the site down page up to restart the app server
8:50 am - Jackie from ICE/EID confirmed that she is seeing 22 e3 booking blocking sessions; Andrew and Wes asked her if she can kill these blocking sessions
8:56 am – Andrew and Lars confirmed Processing is blocked and the app is off
9:00 am - Jackie from ICE/EID confirmed the database is looking good and we have 48 inactive e3 sessions
9:02 am – Wes asked Brandon to verify e3 Processing; Brandon confirmed Processing is launching
9:10 am – Wes from e3 asked the ICE/EID engineer what has been added to the Detentions module database. ICE/EID confirmed a related table was added for the detention family unit
9:15 am - Jackie from ICE/EID confirmed seeing 25 e3 Detentions blocking sessions
9:29 am – Wes asked if the site down page is up for Detentions; Lars confirmed the site down page is up
9:43 am - Andrew, e3 developer, sees that the issue appears to be that the subject unique index is lost, and if the ICE/EID DBAs can relax the constraint that might fix the issue; Medina from ICE/EID stated that the unique index was just added yesterday but they will look into it.
9:50 am – Wes from e3 stated we have been down for 5 hours and asked what the status is on fixing this issue and how long it will take. Stephanie from ICE/EID said to give them a few minutes
10:07 am – Chris from ICE/EID stated they have three plans to work through: if plan 1 does not work they will do plan 2, and if that doesn't work they will try plan 3. Plan 1 is collecting statistics on the subject group, plan 2 is grabbing the statistics plan, and plan 3 is adding the unique index back
10:27 am – Medina stated that performing these three plans will take them some time, and Wes confirmed to proceed as long as that fixes the issue
10:45 am – Jackie confirmed that she is creating a list now
10:50 am – Medina stated that since Jackie is checking the Detentions log sessions, e3 will check the apps; Lars confirmed he is checking the Detentions Apache log
10:54 am – Wes confirmed e3 Detentions is available, but we will keep monitoring for the next hour until it is fully stable
10:55 am – the bridge call is still ongoing
10:56 am – the e3 homepage is updated
11:05 am – Wes asked if they can hold any upcoming implementations and discuss them with e3; the ICE/EID team confirmed they will.
11:08 am – the ICE/EID team and e3 confirmed all is clear
11:10 am - The bridge call ended | | | e3 Processing and e3 Detentions Degraded State | ICE | ICE/EID | ICE DBAs implemented the unique index on the subject group and subject group member tables. This was a partial rollback of their changes from last night. We confirmed on the backend that performance is back to pre-EID-release levels. The site down page has been removed and e3DM has been turned back on. e3 Support has verified and was able to launch e3DM successfully. While monitoring for blocked sessions, at 11:10 AM ICE confirmed that they were no longer seeing blocked sessions and the database looked clear. Bridge call ended. | Further investigation revealed that changes ICE made last night/this morning removed an index on one of the tables used by e3DM. | ICE/e3 | ICE DBA | e3 Processing and e3 Detentions are experiencing an outage. Users are unable to process subjects. | | ICE DBA | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 7/19/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 7/19/18 05:00 | 7/19/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 6/24/18 07:30 | 6/24/18 17:45 | 10:15 | N/A | | CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed No responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and TPRS transactions. OBIM established a bridge call to contact CJIS | | OBP;#OFO | | | 6/24/18 07:30 | OBIM | 11149721 | Yes | N/A | 7:29am – e3 support received a Notification from OBIM
7:37 am – e3 Support called to notify Brandon Long and left a voicemail
7:45 am – Tigist from e3 Support created a ticket and updated the homepage
8:00 am – e3 Support joined the bridge call; CJIS is having issues after a completed maintenance
8:12 am – CBP Duty Officers called asking if e3 is impacted and if a bridge call has been established; e3 provided the bridge number
8:13 am – Tigist from e3 Support sent situational awareness
8:19 am – e3 backlog is 262 Booking & 360 TPRS at the moment. e3 will give a backlog update every hour.
8:20 am – CBP Duty Officers join the bridge
8:35 am – e3 Support sent update #2
8:49 am – Tigist from e3 sent an email notification to upper management
8:53 am – Andrea from the OBIM service desk reached out to CJIS for an update; CJIS stated that they are still working on the issue
8:56 am – Andrea from OBIM contacted CJIS for an update; CJIS engineers stated they are continuing to troubleshoot but there is no ETA for when the system will be up and running
9:02 am – e3 Backlog is spiking up to 277 booking & 390 TPRS from previous update at 8:35am
9:21 am – Brandon from e3 support join the call
9:48 am – OBIM dash board status update given for CBP TPRS
10:10am – e3 Backlog counts 330 Booking, 390 TPRS, No processing is occurring at the moment
10:49 am – Brandon from e3 asked the Duty Officer to get an update from CJIS. Andre from the OBIM service desk stated that he spoke with Nick at the CJIS Operations Control Center, who stated CJIS engineers are still working on the issue.
11:00 am – OBIM Duty Officer gave a dashboard update for TPRS showing the number dropping by 15 compared to the last 45 minutes.
11:57 am – OBIM Duty Officer asked the OBIM service desk to reach out to CJIS for a status update.
12:00 pm – OBIM has reached out to CJIS; no response
12:05 pm – e3 Support reached out to the CJIS Watch Desk and spoke to Nick. Nick stated the issue has been resolved but engineers are working on the backlog of transactions.
12:36 pm – The backlog of booking (BKG) transactions is at 267.
12:45 pm – OBIM is getting ready to create a list of TIDs and will forward it to CJIS for processing. An OBIM engineer stated it may not be processed until tomorrow, and Brandon from e3 stated we can't wait until tomorrow with 267 transactions to be processed. The OBIM service desk needs to reach out to the OCC and let them know that the transactions will be sent over to be processed, but they have no contact to send the transactions to.
12:50 pm – Andre from the OBIM service desk has reached out to the CJIS OCC (Operations Control Center); Nick stated the person coming in at 2:00 PM will have more insight and can give an update
12:52 pm – Brandon from e3 Support reached out to the CJIS Watch Desk; he stated the issue has been resolved but they are still working on the backlog. From the e3 standpoint there is no response for BKG transactions.
1:00 pm – The Duty Officer asked Morris if he has an escalation number; Morris stated he left a voicemail and is looking for any additional escalation number
1:07 pm – Shawn from OBIM is joining back
1:10 pm – John Bassett asked Shawn if he is seeing booking transactions processing and wants to make sure the last transaction processed; Shawn stated the last TPRS transaction was processed at 12:01 PM
2:20 pm – An OBIM engineer reached out to the CJIS OCC (Operations Control Center) and spoke with Lisa. Lisa stated that she is not aware of the issue; once she finds the right resource she will give an update.
2:30PM – e3 Support member Nielab Joined the bridge call
2:42PM – OBIM noted there was still no movement with BKG transactions.
2:47PM – OBIM reported that they are in the process of compiling a list of TCNs to be forwarded to CJIS for investigation. The TCN for one BKG transaction was submitted to CJIS for investigation. CJIS is currently looking into the BKG record.
2:52PM – Second shift Service Desk joined the bridge call
3:05PM – A CJIS representative reported that they were unable to find the CBP booking record in their system. One of their IT engineers will be looking into that specific transaction. OBIM provided a second TCN to CJIS for investigation.
3:19PM – OBIM stated that transactions are still processing in real time.
3:40PM – CJIS representative reported that they were unable to locate the CBP booking record in their system.
3:46PM – Brandon asked to get the start time for when the transactions started processing in real time.
3:47PM – OBIM confirmed that transactions started to process in real time at 11:20am
3:49PM – CJIS started working on the list of TCN and reported that they were able to find the booking records in their system.
4:05PM – Current BKG backlog is at 248 for the past 10 minutes
4:07PM – Eddie from OBIM joined the bridge call
4:19PM – BKG transactions started to slow again and remain at 248.
4:20PM – OBIM reached back out to the CJIS OCC
4:23PM – CJIS brought in another resource to work through the backlog
4:32PM – Eddie from OBIM observed backlog started growing
4:47PM – OBIM observed backlog dramatically dropped from 248 transactions to 35 and within seconds the backlog went to 15 transactions.
4:52PM – OBIM confirmed transactions were processing in real time
5:20PM – OBIM kept monitoring for stability
5:43PM – OBIM confirmed that dashboard looked green and the system looked stable.
5:46PM – Bridge call ended. | | | CJIS Incident Description and Impact Statement | | NGI/CJIS | The Criminal Justice Information Services (CJIS) Division has confirmed that they corrected the technical issue in their system that prevented them from responding to TPRS and BKG transactions. OBIM has observed that CJIS transaction responses are being received in real time and all backlogs have been processed, with the exception of one transaction which is over SLA. Eddie from OBIM will forward that one over-SLA transaction to CJIS representative Jennifer tomorrow morning for investigation. As of 5:46 PM it was confirmed that turnaround times for BKG and TPRS transactions are within the normal range, the system is stable, and all transactions are processing in real time. CJIS did not identify the root cause, but they did confirm that today's technical issue was due to the maintenance that took place Sunday, June 24, at 1:00 AM. | CJIS did not identify the root cause, but they did confirm that today's technical issue was due to the maintenance that took place Sunday, June 24, at 1:00 AM. | N/A | OBIM & CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending responses from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* Without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | E3 Biometrics, FPQ2 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 6/24/18 07:30 | 6/24/18 17:45 | 10:15 | N/A | | CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed No responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and TPRS transactions. OBIM established a bridge call to contact CJIS | | OBP;#OFO | | | 6/24/18 07:30 | OBIM | 11149721 | Yes | N/A | 7:29am – e3 support received a Notification from OBIM
7:37 am – e3 Support called to notify Brandon Long and left a voicemail
7:45 am – Tigist from e3 Support created a ticket and updated the homepage
8:00 am – e3 Support joined the bridge call; CJIS is having issues after a completed maintenance
8:12 am – CBP Duty Officers called asking if e3 is impacted and if a bridge call has been established; e3 provided the bridge number
8:13 am – Tigist from e3 Support sent situational awareness
8:19 am – e3 backlog is 262 Booking & 360 TPRS at the moment. e3 will give a backlog update every hour.
8:20 am – CBP Duty Officers join the bridge
8:35 am – e3 Support sent update #2
8:49 am – Tigist from e3 sent an email notification to upper management
8:53 am – Andrea from the OBIM service desk reached out to CJIS for an update; CJIS stated that they are still working on the issue
8:56 am – Andrea from OBIM contacted CJIS for an update; CJIS engineers stated they are continuing to troubleshoot but there is no ETA for when the system will be up and running
9:02 am – e3 Backlog is spiking up to 277 booking & 390 TPRS from previous update at 8:35am
9:21 am – Brandon from e3 support join the call
9:48 am – OBIM dash board status update given for CBP TPRS
10:10am – e3 Backlog counts 330 Booking, 390 TPRS, No processing is occurring at the moment
10:49 am – Brandon from e3 asked the Duty Officer to get an update from CJIS. Andre from the OBIM service desk stated that he spoke with Nick at the CJIS Operations Control Center, who stated CJIS engineers are still working on the issue.
11:00 am – OBIM Duty Officer gave a dashboard update for TPRS showing the number dropping by 15 compared to the last 45 minutes.
11:57 am – OBIM Duty Officer asked the OBIM service desk to reach out to CJIS for a status update.
12:00 pm – OBIM has reached out to CJIS; no response
12:05 pm – e3 Support reached out to the CJIS Watch Desk and spoke to Nick. Nick stated the issue has been resolved but engineers are working on the backlog of transactions.
12:36 pm – The backlog of booking (BKG) transactions is at 267.
12:45 pm – OBIM is getting ready to create a list of TIDs and will forward it to CJIS for processing. An OBIM engineer stated it may not be processed until tomorrow, and Brandon from e3 stated we can't wait until tomorrow with 267 transactions to be processed. The OBIM service desk needs to reach out to the OCC and let them know that the transactions will be sent over to be processed, but they have no contact to send the transactions to.
12:50 pm – Andre from the OBIM service desk has reached out to the CJIS OCC (Operations Control Center); Nick stated the person coming in at 2:00 PM will have more insight and can give an update
12:52 pm – Brandon from e3 Support reached out to the CJIS Watch Desk; he stated the issue has been resolved but they are still working on the backlog. From the e3 standpoint there is no response for BKG transactions.
1:00 pm – The Duty Officer asked Morris if he has an escalation number; Morris stated he left a voicemail and is looking for any additional escalation number
1:07 pm – Shawn from OBIM is joining back
1:10 pm – John Bassett asked Shawn if he is seeing booking transactions processing and wants to make sure the last transaction processed; Shawn stated the last TPRS transaction was processed at 12:01 PM
2:20 pm – An OBIM engineer reached out to the CJIS OCC (Operations Control Center) and spoke with Lisa. Lisa stated that she is not aware of the issue; once she finds the right resource she will give an update.
2:30PM – e3 Support member Nielab Joined the bridge call
2:42PM – OBIM noted there was still no movement with BKG transactions.
2:47PM – OBIM reported that they are in the process of compiling a list of TCNs to be forwarded to CJIS for investigation. The TCN for one BKG transaction was submitted to CJIS for investigation. CJIS is currently looking into the BKG record.
2:52PM – Second shift Service Desk joined the bridge call
3:05PM – A CJIS representative reported that they were unable to find the CBP booking record in their system. One of their IT engineers will be looking into that specific transaction. OBIM provided a second TCN to CJIS for investigation.
3:19PM – OBIM stated that transactions are still processing in real time.
3:40PM – CJIS representative reported that they were unable to locate the CBP booking record in their system.
3:46PM – Brandon asked to get the start time for when the transactions started processing in real time.
3:47PM – OBIM confirmed that transactions started to process in real time at 11:20am
3:49PM – CJIS started working on the list of TCN and reported that they were able to find the booking records in their system.
4:05PM – Current BKG backlog is at 248 for the past 10 minutes
4:07PM – Eddie from OBIM joined the bridge call
4:19PM – BKG transactions started to slow again and remain at 248.
4:20PM – OBIM reached back out to the CJIS OCC
4:23PM – CJIS brought in another resource to work through the backlog
4:32PM – Eddie from OBIM observed backlog started growing
4:47PM – OBIM observed backlog dramatically dropped from 248 transactions to 35 and within seconds the backlog went to 15 transactions.
4:52PM – OBIM confirmed transactions were processing in real time
5:20PM – OBIM kept monitoring for stability
5:43PM – OBIM confirmed that dashboard looked green and the system looked stable.
5:46PM – Bridge call ended. | | | CJIS Incident Description and Impact Statement | | NGI/CJIS | The Criminal Justice Information Services (CJIS) Division has confirmed that they corrected the technical issue in their system that prevented them from responding to TPRS and BKG transactions. OBIM has observed that CJIS transaction responses are being received in real time and all backlogs have been processed, with the exception of one transaction which is over SLA. Eddie from OBIM will forward that one over-SLA transaction to CJIS representative Jennifer tomorrow morning for investigation. As of 5:46 PM it was confirmed that turnaround times for BKG and TPRS transactions are within the normal range, the system is stable, and all transactions are processing in real time. CJIS did not identify the root cause, but they did confirm that today's technical issue was due to the maintenance that took place Sunday, June 24, at 1:00 AM. | CJIS did not identify the root cause, but they did confirm that today's technical issue was due to the maintenance that took place Sunday, June 24, at 1:00 AM. | N/A | OBIM & CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending responses from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* Without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | E3 Biometrics, FPQ2 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 6/22/18 15:00 | 6/22/18 19:45 | 4:45 | N/A | | e3 Support received notification from OBIM that engineers have observed responses from NGI are not being received. The backlog increased to 40 Booking (BKG) and 80 Ten Print Response (TPRS) transactions pending. e3 Support joined a bridge call with the CBP Duty Officer and OBIM PAS team to investigate. CJIS confirmed they were moving traffic back and forth between different nodes trying to reestablish connectivity. After CJIS made several software fixes, OBIM observed transactions started to decrease and were slowly processing. At 7:43 PM, all transaction responses from the Criminal Justice Information Services (CJIS) Division were being returned to the IDENT system in real time. CJIS processed the remainder of the transactions manually and confirmed they are back operational. The backlog has fully drained with the exception of 2 TPRS transactions over SLA. The remaining 2 transactions over SLA will be sent over to the e3 Development team to have a script run and sent to ICE for processing. CJIS did not provide further detail about the root cause. | | OBP;#OFO | | | 6/22/18 15:40 | OBIM | 11147993 | Yes | N/A | 3:00pm Start of incident
3:40pm Notification from OBIM
3:52pm e3 Asked Duty Officer’s to Spin up bridge call. e3 in talks with OBIM, getting details…
4:04pm e3 on bridge, CJIS is having a hard failure, TPRS is completely down & CJIS is completely aware (System errors on their side). Engineers are working
4:07pm John Bassett taking over for Mike Shehata, Tim Draper, Eddie Kao, Brandon Long, Terry Hall, Devin Blanch from e3 on call.
4:11pm e3 Backlog 40 booking & 80 TPRS at the moment.
4:12pm Will Pierce of the Duty Officers joins bridge.
4:14pm Brandon asks if CJIS had an ETA on the fix or repair, but they gave none.
4:19pm e3 Backlog counts, 61 Booking, TPRS 115
4:23pm Confirmation of time OBIM reached out to CJIS (3:20pm) Eddie is reaching out again to get an ETA of the fix
4:31pm Eddie’s contact with CJIS was that they still do not have an ETA but engineers are actively working issue. Duty Officer states we will check with CJIS every 30 minutes for a status.
4:33pm e3 will give a backlog update every 15 minutes.
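For reference, a minimal sketch of the 15-minute backlog reporting cadence mentioned above; the get_backlog_counts helper is hypothetical, since how the BKG/TPRS queue depths are actually pulled is not described in this log.

import time
from datetime import datetime

def get_backlog_counts():
    # Hypothetical helper: in practice this would query the e3 transaction queue;
    # fixed placeholder values are returned here so the sketch runs standalone.
    return 40, 80

# Report the backlog every 15 minutes, matching the cadence agreed on the bridge call.
while True:
    bkg, tprs = get_backlog_counts()
    print(f"{datetime.now():%I:%M %p} e3 Backlog counts: {bkg} Booking, {tprs} TPRS")
    time.sleep(15 * 60)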
4:41pm Maurice joins from a different phone number.
4:42pm John Bassett rejoins stated that CJIS is moving traffic back & forth between different nodes trying to reestablish connectivity.
4:45pm e3 Backlog counts are 77 Bookings, 156 TPRS.
4:51pm Will Pierce asked if this was related to the GE issue going on simultaneously & john Bassett stated that it was not related.
4:55pm JP from OBIM PAS joins bridge (Chief engineer for OBIM).
5:01pm Paula at CJIS confirmed they are still down with no ETA
5:02pm Jake From e3 rejoins call.
5:06pm e3 Backlog counts 84 Booking, 180 TPRS. Question was asked if any were processing & Terry stated nothing has processed through in the last 30 minutes.
7:15pm Terry from e3 Support drops the call
7:17 pm OBIM engineer asked the Duty Officer for clarification about the update that was sent out regarding the connection between CBP and OBIM
7:25 pm e3 Backlog counts 48 Booking, 3 TPRS
7:26pm Brandon joins from a different phone number
7:31 PM Will Duty officer stated he will send a correction and give a clarification about the question
7:33 Pm e3 Backlog counts 14 Booking, 3 TPRS
7:34pm Brandon stated to OBIM Engineer we can set the transaction to error and reach out to user and have them resubmit
7:36 Pm Brandon from e3 support asked to Duty officer if CJIS can give us an update
7:39 pm e3 Backlog counts 8 Booking, 3 TPRS
7:42pm CJIS confirmed the issue has been corrected; booking transactions have been processing in real time
7:43 pm Brandon from e3 confirmed transactions are processing in real time; e3 backlog count 3 Booking, 3 TPRS
7:48 pm e3 Support dropped the call; the resolution time will be as of 7:43 PM | | | CJIS is having issues processing Booking and TPRS Transactions | | NGI/CJIS | As of 7:43 PM, all transaction responses from the Criminal Justice Information Services (CJIS) Division are being returned to the IDENT system in real time. CJIS has processed the remainder of the transactions manually and confirmed they are back operational. The IAFIS transaction backlog has fully drained with the exception of 2 TPRS transactions over SLA. The remaining 2 transactions over SLA will be sent over to the e3 Development team to have a script run and sent to ICE for processing. | CJIS did not identify the root cause | N/A | OBIM & CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending responses from IAFIS | N/A | CJIS | N/A | The application was available but transactions were not returning | E3 Biometrics & FPQ2 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 6/22/18 15:00 | 6/22/18 19:45 | 4:45 | N/A | | E3 support received notification from OBIM that engineers had observed responses from NGI were not being received. The backlog increased to 40 Booking (BKG) and 80 Ten Print Response (TPRS) transactions pending. E3 support joined a bridge call with the CBP Duty Officer and OBIM PAS team to investigate. CJIS confirmed they were moving traffic back & forth between different nodes trying to reestablish connectivity. After CJIS made several software fixes, OBIM observed transactions started to decrease and were slowly processing. At 7:43 PM, all transaction responses from the Criminal Justice Information System (CJIS) were being returned to the IDENT System in real time. CJIS processed the remainder of the transactions manually and confirmed they were back operational. The backlog fully drained with the exception of 2 TPRS transactions over SLA. The remaining 2 transactions over SLA will be sent over to the e3 Development team to have a script run and sent to ICE for processing. CJIS did not provide further detail about the root cause. | | OBP;#OFO | | | 6/22/18 15:40 | OBIM | 11147993 | Yes | N/A | 3:00pm Start of incident
3:40pm Notification from OBIM
3:52pm e3 asked the Duty Officers to spin up a bridge call. e3 in talks with OBIM, getting details…
4:04pm e3 on bridge, CJIS is having a hard failure, TPRS is completely down & CJIS is completely aware (System errors on their side). Engineers are working
4:07pm John Bassett taking over for Mike Shehata, Tim Draper, Eddie Kao, Brandon Long, Terry Hall, Devin Blanch from e3 on call.
4:11pm e3 Backlog 40 booking & 80 TPRS at the moment.
4:12pm Will Pierce of the Duty Officers joins bridge.
4:14pm Brandon asks if CJIS had an ETA on the fix or repair, but they gave none.
4:19pm e3 Backlog counts, 61 Booking, TPRS 115
4:23pm Confirmed the time OBIM reached out to CJIS (3:20pm); Eddie is reaching out again to get an ETA for the fix
4:31pm Eddie’s contact at CJIS reported they still do not have an ETA, but engineers are actively working the issue. Duty Officer states they will check with CJIS every 30 minutes for a status.
4:33pm e3 will give a backlog update every 15 minutes.
4:41pm Maurice joins from a different phone number.
4:42pm John Bassett rejoins and states that CJIS is moving traffic back & forth between different nodes trying to reestablish connectivity.
4:45pm e3 Backlog counts are 77 Bookings, 156 TPRS.
4:51pm Will Pierce asked if this was related to the GE issue going on simultaneously & John Bassett stated that it was not related.
4:55pm JP from OBIM PAS joins bridge (Chief engineer for OBIM).
5:01pm Paula at CJIS confirmed they are still down with no ETA
5:02pm Jake From e3 rejoins call.
5:06pm e3 Backlog counts 84 Booking, 180 TPRS. Question was asked if any were processing & Terry stated nothing has processed through in the last 30 minutes.
7:15pm Terry from e3 Support drops the call
7:17pm OBIM engineer asked the Duty Officer to clarify the update that was sent out regarding the connection between CBP and OBIM
7:25pm e3 Backlog counts 48 Booking, 3 TPRS
7:26pm Brandon joins from a different phone number
7:31pm Will (Duty Officer) stated he will send out a correction and a clarification in response to the question
7:33pm e3 Backlog counts 14 Booking, 3 TPRS
7:34pm Brandon told the OBIM engineer that e3 can set the transactions to error, reach out to the users, and have them resubmit
7:36pm Brandon from e3 support asked the Duty Officer if CJIS can give an update
7:39pm e3 Backlog counts 8 Booking, 3 TPRS
7:42pm CJIS confirmed the issue has been corrected; Booking transactions are processing in real time
7:43pm Brandon from e3 confirmed transactions are processing in real time; e3 backlog count 3 Booking, 3 TPRS
7:48pm e3 Support drops the call; the resolution time is recorded as 7:43pm | | | CJIS is having issues processing Booking and TPRS Transactions | | NGI/CJIS | As of 7:43 PM, all transaction responses from the Criminal Justice Information System (CJIS) are being returned to the IDENT System in real time. CJIS has processed the remainder of the transactions manually and confirmed they are back operational. The IAFIS transaction backlog has fully drained with the exception of 2 TPRS transactions over SLA. The remaining 2 transactions over SLA will be sent over to the e3 Development team to have a script run and sent to ICE for processing. | CJIS did not identify the root cause | N/A | OBIM & CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending a response from IAFIS | N/A | CJIS | N/A | The application was available but transactions were not returning | E3 Biometrics & FPQ2 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
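Note on the backlog reporting cadence above: during this outage e3 Support agreed to report Booking/TPRS backlog counts on a fixed cadence (roughly every 15 minutes) until the queues drained. A minimal Python sketch of that kind of polling loop follows. The get_backlog_counts() helper, the sample counts, and the 15-minute interval are illustrative assumptions only, not part of the actual e3 tooling.

    import time
    from datetime import datetime

    POLL_INTERVAL_SECONDS = 15 * 60  # 15-minute reporting cadence agreed on the bridge (assumed value)

    # Stand-in data only: (Booking, TPRS) counts as they might drain over an outage.
    # In practice these would come from the e3 backlog report, not a hard-coded list.
    SAMPLE_COUNTS = [(40, 80), (61, 115), (48, 3), (14, 3), (3, 3), (0, 0)]
    _samples = iter(SAMPLE_COUNTS)

    def get_backlog_counts():
        """Hypothetical helper: return the next (Booking, TPRS) pending-transaction count."""
        return next(_samples, (0, 0))

    def report_backlog_until_drained():
        # Log counts on a fixed cadence and stop once both queues are empty.
        while True:
            booking, tprs = get_backlog_counts()
            print(f"{datetime.now():%I:%M %p} e3 Backlog counts: {booking} Booking, {tprs} TPRS")
            if booking == 0 and tprs == 0:
                print("Backlog fully drained.")
                break
            time.sleep(POLL_INTERVAL_SECONDS)

    if __name__ == "__main__":
        report_backlog_until_drained()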
| Unplanned Outage | e3 Biometrics | 6/15/18 13:40 | 6/15/18 16:30 | 2:50 | N/A | |
Incident Description and Impact Statement: E3 Support has received notification from the SCM group that spikes of latency have resurfaced across the Biometric & Processing modules of e3. A bridge call has been spun up with Duty Officers, ICE EID team, CBP NOC, e3 Support, EDME Lan and the EMOC. Server dumps have been run & are being investigated.
Update 1: Network engineers continue to work with ICE DBAs to get baseline TCP dumps out to the whole group with as little impact as possible to database users. The latency spike has normalized over the last 35 minutes. The NOC believes the next best plan of attack is to examine connectivity between the application & database servers. Only one station in the field has reported any kind of latency issue so far. E3 and OBIM continue to monitor the situation.
Resolved: ICE DBAs have sent the packet capture file for review. Engineers have confirmed the file was received and are currently reviewing the packet captures. Engineers advised that the captures were taken during times when there were no issues, so they will be used as a baseline going forward. If the issue appears again, another packet capture will need to be taken on the application server during the latency window, and the captures compared to correlate any issues. The bridge call has closed and will reconvene if the issue resurfaces. | | OBP;#OFO;#OFO/SIGMA | | | 6/15/18 16:30 | SCM group | 11109296 | Yes | N/A | Friday, June 15, 2018
9:05 am Tigist from e3 joins bridge call.
9:09 am Engineers stated e3 Biometrics and Detentions were showing high spikes in response times; the network is running fine and no services are disabled
9:20 am e3 engineer Nikhil stated some of the services did not have a Java core, and the network engineer said they will drill into App Dynamics
9:20 am ICE engineer asked Lars from the e3 team how long the issue has been going on and which tool is used to see it when it arises; Lars responded it has been occurring since Wednesday and the team uses App Dynamics
9:35 am DC1 team stated the network team is running a cache string
9:50 am Asked the Duty Officer to have the Unix team join the call
2:00 PM e3 support member Tigist joined the bridge call
2:13 PM Engineers are still troubleshooting and asking questions
2:14 PM Engineers are unable to see the latency issue over the network; traffic across the network is not reflecting any abnormalities
2:20 PM DC1 stated both the Switch 193 port and interface are looking good
2:25 PM Duty Officer asked e3 support to reach out to Weslaco and Tucson to ask about performance
2:25 PM John, a network engineer, requested a packet capture of the ICE database. ICE is hesitant because it could potentially impact their other services
2:28 PM Network engineer stated there is no issue on the network
2:30 PM e3 support reported back on performance: Weslaco and Tucson have no issues
2:30 PM Damon stated the next plan of attack should be to perform packet captures between the application servers and the database servers, based on what he is seeing in App Dynamics. EID has 2 databases, .85 and .90
2:30pm - EID is going to run a packet capture for 2 minutes to get a baseline so hopefully the next time we see a spike it can be caught and data can be compared.
2:34 PM Engineer asked if anyone from e3 can help him with the App Dynamics server; Wes from e3 responded that the server chart shows the same pattern. While troubleshooting, the engineers saw the BEMS e3 Bi response time at 21.20 per/sec
2:42 PM ICE EID team requested some information from the e3 team in order to run the cache server check, which the Unix admin team will run; Lars responded he will send it over
2:45 PM Unix admin team asked the NOC team which TCP dump options to use and how often; the NOC responded to run it for 1 minute
2:47 PM EID DBA confirmed the server e3 operates off of is 10.16.36.85 port 1540
2:53 PM Unix team ran the cache server check for the first round; it took 1 minute
2:55 PM EID DBAs are starting to run their packet capture
2:59 PM EID DBAs had to redo the packet capture because there was not enough space for the file. EID is also trying to figure out how to send the file due to its size
3:02 PM Brandon from e3 support stated Chula Vista reported they are having issues with e3 Processing; it is taking 20-30 minutes to process in the application and 15 minutes to start the application. Engineer confirmed she is able to see a business transaction showing 100% failure at 44 calls/min
3:05 PM CBP Director of NOD MacNeil was on the bridge call and stated one of the server names requested by the Unix team is classified, so he will send it by email once requested
3:07 PM EID DBAs are sending the packet capture to John McNeil
3:09 PM John has received the packet capture and is reviewing it
3:15 PM ICE DBAs are having issues sharing files
3:29 PM John was able to find an ICE user in his building to see if he could share the packet capture file for review
3:45 PM Network engineers continue to work with ICE DBAs to get baseline TCP dumps; they are still having issues sending files due to permission problems
4:01 PM Due to a tool server setting they are still unable to share the file
4:12 PM The file download has been completed; network engineers are now copying it over to a thumb drive to review the files
4:17 PM Network engineers have noticed the NGI servers were reflecting high response times | | | Third Occurrence - Situational Awareness: Latency Throughout e3 Module | | E3 | A further review of all the e3 applications over a 3-day period in App Dynamics shows no recurrence of the latency issues e3 experienced last week since the last recorded issue between 12 and 2 on 6/15. The final review was completed at 11:15 am today (6/18/18). | On 6/13/2018 3:57 PM E3 Support received notification that USBP agents were experiencing latency issues with E3 Applications. Upon further investigation e3 support noticed page time outs in Auto Ops and slow connectivity within AppDynamics. E3 engaged the Software Configuration Team to perform a recycle of the e3 services. A bridge call was established with CBP Duty Officers and the CBP NOC to investigate. CBP NOC/EDME LAN investigated the CBP network but were unable to identify any issues. Following the recycle of the e3 services, applications started to show some improvement but were still experiencing occasional slowdowns. The AO Portal was showing green across the board; however, App Dynamics continued to display high latency for the e3 applications. ICE DBAs joined the call and confirmed there were no blocked sessions. Connectivity issues cleared with no intervention, and e3 Software Configuration Managers confirmed that the response times for e3 applications were within normal range. Engineers were unable to determine a root cause and therefore shut down the bridge call, with the understanding that e3 Support would continue to monitor throughout the remainder of the night and the bridge call would reconvene in the morning to follow up with the engineers for status checks. On 06/14/2018 9:40 AM E3 Support received notification from McAllen Border Patrol Station and Chula Vista that they were experiencing latency issues throughout the E3 Applications. A bridge call was established with engineers to investigate. Engineers were unable to identify a root cause and requested DHS OneNet join the bridge call to assist in remediating the situation. After extensive troubleshooting, E3 support reached out to multiple sites including Weslaco and Tucson TCC for performance checks; both sites reported they were operating normally. Several servers that were showing high session rates were recycled, and at approximately 4:40 pm e3 support noticed response times dropping & stabilizing. After further investigation engineers were unable to identify a single entity responsible for the latency. Engineers decided on the bridge call that they would reconvene at 8:30am the next morning. On 6/15/18 E3 Support noticed the latency issues resurfaced across the e3 Biometric & Processing modules. A bridge call was spun up with Duty Officers, ICE EID team, CBP NOC, e3 Support, EDME LAN and the EMOC. Engineers performed server dumps for further investigation. Network engineers worked with ICE DBAs to get baseline TCP dumps out to the whole group with as little impact as possible to database users. ICE DBAs sent over packet captures as a baseline going forward. Engineers on the bridge agreed that if the issue appears again, another packet capture will need to be taken on the application server during the latency window, and the captures processed to correlate any issues. A further review of all the e3 applications over a 3-day period in App Dynamics shows no recurrence of the latency issues experienced by e3 since the last recorded issue on 6/15. 
The final review was completed at 11:15 am on 6/18/18. | N/A | DHS OneNet, EMOC, CBP Duty Officers, EDME | Launching e3 core applications took longer than normal. Agents reported extreme slowness within the e3 core applications, as a result taking up to 20-25 minutes to process subjects. Due to the latency issues, users may have experienced time outs in the applications as well as longer than normal wait times for pages to load. Agents were able to process subjects, but due to the network latency McAllen, Rio Grande City and some of the larger BP stations were at risk of reaching capacity. | N/A | N/A | N/A | e3 applications were available. Launching e3 core applications took longer than normal. Agents reported extreme slowness within the e3 core applications, as a result taking up to 20-25 minutes to process subjects. Due to the latency issues, users may have ex | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
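For reference, the 1-minute baseline capture described in the record above (traffic between the application server and the EID database at 10.16.36.85, port 1540) could be scripted roughly as follows, wrapping tcpdump from Python. This is a sketch only: the capture interface, output file name, and 60-second duration are assumptions for illustration, tcpdump requires elevated privileges, and the exact filter should match whatever the engineers agree to on the bridge.

    import subprocess
    import time

    # Host and port taken from the bridge-call notes; interface and file name are assumed.
    DB_HOST = "10.16.36.85"
    DB_PORT = "1540"
    IFACE = "eth0"             # assumption: capture interface on the application server
    OUTFILE = "baseline.pcap"  # assumption: where the baseline capture is written
    DURATION_SECONDS = 60      # the NOC asked for a 1-minute baseline

    def capture_baseline():
        # Start tcpdump filtered to traffic between the app server and the EID database.
        cmd = [
            "tcpdump", "-i", IFACE, "-w", OUTFILE,
            "host", DB_HOST, "and", "port", DB_PORT,
        ]
        proc = subprocess.Popen(cmd)
        try:
            time.sleep(DURATION_SECONDS)   # let the capture run for the agreed window
        finally:
            proc.terminate()               # stop the capture and flush the pcap file
            proc.wait()

    if __name__ == "__main__":
        capture_baseline()

A capture taken this way during a quiet period can then be compared against a second capture taken while the latency is occurring, which is the correlation step the engineers agreed to on the call.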
| Unplanned Outage | e3 Processing | 6/15/18 13:40 | 6/15/18 16:30 | 2:50 | N/A | |
Incident Description and Impact Statement: E3 Support has received notification from the SCM group that spikes of latency have resurfaced across the Biometric & Processing modules of e3. A bridge call has been spun up with Duty Officers, ICE EID team, CBP NOC, e3 Support, EDME Lan and the EMOC. Server dumps have been run & are being investigated.
Update 1: Network engineers continue to work with ICE DBAs to get baseline TCP dumps out to the whole group with as little impact to database users. The latency spike has normalized over the last 35 minutes. The NOC believes this is the next best plan of attacking the problem, connectivity between the application & database servers. Only one station in the field has reported any kind of latency issue so far. E3 and OBIM continue to monitor situation.
Resolved: ICE DBA’s have sent the file for the packet capture to be reviewed. Engineers have confirmed the file is received and are currently in the process of reviewing packet captures. Engineers advised that the packet captures were during times where there were no issues so they will be used as a baseline going forward. If the issues appears again there will need to be another packet capture during time frame of the latency on application server and process packet captures to correlate any issues. Bridge call has closed and will reconvene if the issue resurfaces. | | OBP;#OFO;#OFO/SIGMA | | | 6/15/18 16:30 | SCM group | 11109296 | Yes | N/A | Friday, June 15, 2018
9:05 am Tigist from e3 joins bridge call.
9:09 am engineers stated e3 Biometrics and Detentions was showing high spike for the response, networks running fine and no service disable
9:20 am e3 engineers Nikhil stated some of the service did not have JAVA CORE and network engineer said they will drill to look in App Dynamic
9:20 am Ice engineer asking for e3 team Lars how long the issue was going on and when the issue rise what is the tool to see the issue ? Lars respond it is since Wednesday and we use App dynamics
9:35 am Dc1 team stated Network team is running cache string
9:50 am Asked Duty officer if Unix team to Join on call
2:00 PM e3 support member Tigist Joined the bridge call
2:13 PM engineer still trouble shooting and asking
2:14pm - Engineers are unable to see latency issue over the network traffic across the network is not reflecting any abnormities.
2: 20 PM DC1 stated both Switch 193 port and interface is looking good
2:25 PM Duty officer ask to e3 support to reach out to Weslaco and Tucson to ask about the performance
2:25pm - John a network engineers requested a packet capture of the ICE database. ICE is hesitant because it could potentially impact their other services.
2: 28 Pm network engineer stated since there is no issue on network
2:30 PM E3 support gave out the performance Weslaco and Tucson has no issue
2:30pm - Damon stated the next plan attack should be to preform packet captures between through application servers and connection servers to the database due to what he is seeing in app dynamics. EID has 2 database .85 and . 90.
2:30pm - EID is going to run a packet capture for 2 minutes to get a baseline so hopefully the next time we see a spike it can be caught and data can be compared.
2:34 PM engineer was asking if anyone from e3 can help him with App dynamics server ,Wes from e3 respond back the server chart shows the same pattern, the engineers was trouble shooting and seen BEMS e3 Bi responding time is 21.20 per/sec
2:42pm ICE EID Team request in order to run cache server they need some information from e3 team and Unix admin team will be the one run , Lars respond he will send over
2:45 pm Unix admin team asked to NOC team what TCP run dump option to use and how often and NOC respond for 1 minutes
2:47pm - EID DBA confirm which server E3 operates off of is 10.16.36.85 port 1540
2: 53 pm Unix team run the cache server for the 1st round and it was taking 1 minutes
2:55pm - EID DBA are starting to ran there packet capture
2:59pm - EID DBA had to redo the capture packet due to their file type not having enough space. EID is also currently trying to figure out how to send the file due to its size
3: 02pm e3 support team Brandon stated Chula Vesta reported they are having issue with e3 processing it is taking 20 -30 min to process the application and 15 min to start the application. Engineer confirmed she is able to see business transaction shows 100 % failure, 44 calls/min
3:05 pm CBP Director of NOD MacNeil was on the bridge call and he stated one of the server name that was requested by Unix team is a classified so he will send him an email once he requested
3:07pm - EID DBA are sending the packet capture to john McNeil
3:09pm - John has received the capture packet and is reviewing
3:15pm - ICE DBA are having issues sharing files
3:29pm - John was able to find an Ice user in his building to see if he would to share the file for the packet capture to be reviewed
3:45pm - Network engineers continue to work with ICE DBAs to get baseline TCP dumps they are still having issues sending files due to permission issues
4:01pm - due to a tool server setting the are still unable to share file
4:12pm - the file download has been completed network engineers are now coping over to a thumb drive to review the files
4:17pm - network engineers have noticed the NGI servers were reflecting high response times | | | Third Occurrence - Situational Awareness: Latency Throughout e3 Module | | E3 | After further review of all the e3 Applications for 3 day period in App Dynamics shows no reoccurrence of latency issues experienced last week by e3 since the last recorded issue between 12 – 2 on 6/15. The final review was completed @ 11:15 am today (6/18/18). | On 6/13/2018 3:57 PM E3 Support received notification that USBP agents were experiencing latency issues with E3 Applications. Upon further investigation e3 support noticed page time outs in Auto Ops and slow connectivity within AppDynamics. E3 engaged the Software Configuration Team to perform a recycle of the e3 services. A bridge call was established with CBP Duty Officer's, and the CBP NOC to investigate. CBP NOC/EDME LAN investigated the CBP network, but were unable to identify any issues. Following the recycle of the e3 services, applications started to show some improvement but were still experiencing occasional slowdowns. The AO Portal was showing green across the board however, App Dynamics continued to display high latency for the e3 applications. ICE DBA's joined the call and confirmed there were no blocked sessions. Connectivity issues cleared with no intervention and e3 Software Configuration Managers confirmed that the response times for e3 applications were within normal range. Engineers were unable to determine a root cause therefore shutting down the bridge call, with the understanding that e3 Support would continue to monitor throughout the remainder of the night and the bridge call would reconvene in the morning to follow up with the engineers for status checks. On 06/14/2018 9:40 AM E3 Support received notification from McAllen Border Patrol Station and Chula Vista that they were experiencing latency issues throughout the E3 Applications. A bridge call was established with engineers to investigate. Engineers were unable to identify a root cause and requested DHS OneNet join the bridge call to assist in remediating the situation. After extensive troubleshooting E3 support reached out to multiple sites including Weslaco, and Tucson TCC to get performance checks, in which both sites reported they were operating normally. Several servers that were showing high session rates were recycled an approx. 4:40 pm e3 support noticed response times dropping & stabilizing. After further investigation engineers were unable to identify a single entity, responsible for the latency. Engineers decided on the bridge call that they would reconvene at 8:30am in the morning. On 6/15/18 E3 Support noticed the latency issues resurfaced across e3 Biometric & Processing modules. A bridge call was spun up with Duty Officers, ICE EID team, CBP NOC, e3 Support, EDME Lan and the EMOC. Engineers performed server dumps for further investigating. Network engineers worked with ICE DBAs to get baseline TCP dumps out to the whole group with as little impact to database users. ICE DBA's sent over packet captures as a baseline going forward. Engineers on the bridge agreed that if the issues appears again there will need to be another packet capture during time frame of the latency on application server, and they would need to process packet captures to correlate any issues. After further review of all the e3 Applications for a 3 day period in App Dynamics shows no reoccurrence of latency issues experienced by e3 since the last recorded issue between on 6/15. 
The final review was completed at 11:15 am on 6/18/18. | N/A | DHS One Net, EMOC, CBP Duty Officers, EDME, | Launching e3 core applications took longer than normal. Agents reported extreme slowness within e3 core applications, as a result taking up to 20 -25 minutes to process subjects. Due to the latency issues, users may have experienced times outs in the applications as well as longer than normal wait times for pages to load. Agents were able to process subjects but due to latency network connectivity McAllen, Rio Grande City and some of the larger BP stations were at risk for reaching capacity. | N/A | N/A | N/A | e3 applications were available. Launching e3 core applications took longer than normal. Agents reported extreme slowness within e3 core applications, as a result taking up to 20 -25 minutes to process subjects. Due to the latency issues, users may have ex | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Assault | 6/14/18 09:40 | 6/15/18 11:25 | 1:45 | NA | | E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with the Duty Officers, ICE EID and EDME LAN team. The Duty Officers are in the process of reaching out to the EMOC to join the bridge call. E3 will provide updates as they come. Please see the bridge call information below. | | OBP;#OFO | | | 6/15/18 09:20 | Agents in the field | 11109296 | Yes | NA |
10:48am - Mark with ICE joined the bridge call
10:59am - CBP engineering team is looking for security scans that may be in process; EID is looking into backup jobs; e3 is awaiting the EMOC to join the bridge call to perform a trace route
10:59am - DC1 confirmed the latency is picking up at 10.10.72.136 at hop 9; hops 9 through 16 are showing 30 milliseconds, while hops 1 through 8 were 2-3 milliseconds. Hop 9 is outside of DC1.
11:02am - Duty Officers are reaching out to OneNet to join the bridge call
11:04am - Lars is going to start the process of recycling Intake
11:11am - The TOC Joined the bridge call
11:12am - Terry from e3 joins bridge call.
11:14am - TOC (?) asked if the impact is the same as yesterday; the only site reporting slowness is McAllen. AO Portal is showing 100% green
11:16am - e3 asked if Duty Officer reached out to anyone at OneNet, they are still in the process of contacting.
11:19am - (Wes) e3 Still waiting on DHS OneNet resource to join bridge.
11:27am - still trying to get a DHS OneNet resource on e3 Bridge (waiting 25 minutes)
11:32am - Wes asked SCM if the Biometrics cluster needed to be restarted; SCM concurred. McAllen stated which modules they were having issues in (Biometrics & Intake); both clusters are being restarted.
11:36am - Jake with e3 stepping away temporarily.
11:39am - Max capacity hit with bridge callers, OneNet cannot join, several e3 users dropping call.
11:41am - Maryann joins Bridge. A device has been highlighted where the latency has started, tracing route of device (NOC & EDME on call) No latency found on EDME side.
11:46am - Question asked whether the EDME LAN firewall was blocking anything; EDME has no firewall. Asked if the switches have been checked. DC1 is checking 10.10.72.136 (router)
11:50am - EMOC sees routers bouncing back & forth on destination traffic from source.
11:53am - Wes asked if we can find out if a replication service is running on network. Being checked.
11:55am - Issues seem to be coming from the CBP block 10.159.112.7 on down and it’s bouncing. The problem seems to be in routing. Trying to get routing resources on the bridge.
11:58am - EDME LAN confirms the switch is operating properly on their side.
12:02pm - Asked if ISD is on the call. The NOC confirmed the problem is not at the DMC. Still trying to get the routing group on the call. Could NAT’ing be the problem?
12:06pm - Jake confirms they are trying to get a routing resource.
12:07pm - CBP NOC stepping away.
12:10pm - Stated that the problem is somewhere at the NDC. Wes asked if we could run a network flow analysis of the bandwidth being used. App Dynamics showed the problem started around 9:30am this morning & it looks like there is a multipath problem at the NDC.
12:17pm - Traceroute has narrowed down issue to hop 9 at 10.10.72.136 (resides at OneNet not NDC) is being checked.
12:21pm - 10.10.72.136 is Crypto A & 10.10.72.137 is Crypto B. e3 is asking for confirmation by NOC.
12:27pm - The NOC cannot access these devices, only EDME.
12:30 PM- Server group sending info to EDME Lan group.
12:33 PM- Now awaiting the traceroute from source to destination to complete. Asked if there was any slowness on the server at NDC; the server group is not seeing it. Nothing is over-resourced. Traffic is very minimal on the server.
12:38 PM- Jose V unable to access APPD. E3 is investigating. Jake Bumbrey states everything is in the green EXCEPT Biometrics.
12:42pm - A ping test was performed and the results came back in the RED.
12:43 PM- Jose is asking if anyone has other troubleshooting techniques to remedy this latency issue. Ping test in the RED on AO Portal. No code changes on the CBP side.
DC1 states that the circuit is the weakest link. They really need NetFlow to diagnose these types of issues.
12:47 PM- Normal latency is 4.5 seconds; the average has been 8 seconds with spikes up to 11 seconds over the last half hour. Need to find the bandwidth hog.
12:52 PM- Something seems to be running that no one knows about. Sonny with EDME LAN asked if anyone has checked the WAN router. Who has rights to the WAN router?
12:54 PM- The DC Crypto A router connects to the WAN router and then into OneNet.
12:58 PM- Latency checking is now being done on the WAN router
1:01 PM-ALL traffic is going through this router on a 5Gb circuit, not just CBP traffic.
1:07 PM-Wesley & Megan from the routing group haven’t found any routing issues but are checking to see if something is running in the background.
1:10 PM-No latency being found. Want to check the response time from the network to the ICE server & back. Asking who can do a TCP dump on the ICE side to check server latency.
1:13 PM-BEIB U asking what’s going on. Donna joins the call. Ricky from IST has all the traceroutes & hasn’t found any issues. Reaching out for any assistance.
1:23 PM-Need DC1 people on call to access device 10.16.0.18 & do a traceroute. 10.16.36.85 is an unknown device.
1:25 PM-Wes at e3 asked if there was high database usage on the ICE side. Nothing out of the normal was found.
1:35 PM-e3 called RGC to find out why they are over capacity due to e3 or volume & it’s a volume issue, NOT e3. They are processing just fine.
1:41 PM-50% drop in CBP packets found & engineers are investigating.
1:53 PM- Packets are dropping at the ICE interface firewall.
2:00 PM-e3 support member Tigist joined the bridge call
2:13 PM-Engineers are still troubleshooting and asking questions
2:20 PM-DC1 stated both the Switch 193 port and interface are looking good
2:25 PM-Duty Officer asked e3 support to reach out to Weslaco and Tucson to ask about performance
2:30 PM-e3 support reported back on performance: Weslaco and Tucson have no issues | | | Second Occurrence - Situational Awareness: Latency Throughout e3 Modules | | E3 | Resolved: E3 Support & CBP engineers continue to troubleshoot. No new calls or tickets were being received from the field reporting issues from the latency. Around 10:05AM a small slowness blip was noticed on the e3 Processing servers, but it was nothing out of the ordinary. App Dynamics and Auto Ops look stabilized. Bob Gram from the Unix team has joined the bridge call to assist with the troubleshooting efforts. The Unix engineer performed a TCP capture on the Unix server to baseline the connection from the EID database to DC1. The network engineer reported that the baseline looked good and did not identify any issue. E3 latency issues have stabilized and are holding steady, and E3 is processing normally. The bridge call has been shut down but teams continue to monitor for stability. The bridge call will be reconvened if a reoccurrence happens later today and/or over the weekend. Also, we will reconvene on the same bridge on Monday (6/18) at 0830 ET to resume discussion/investigation into the issue in case no problems are detected over the weekend. | NA | NA | Duty Officers, ICE EID and EDME LAN team, EMOC | Agents were unable to access the application | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
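The hop-by-hop isolation described in the timeline above (2-3 ms through hop 8, roughly 30 ms from hop 9 at 10.10.72.136 onward) can be approximated with a small wrapper around traceroute that flags the first hop where the round-trip time jumps. This is a sketch only: the destination address, the 20 ms jump threshold, and the output parsing are assumptions, and traceroute output formats vary by platform.

    import re
    import subprocess

    DESTINATION = "10.16.36.85"   # assumption: trace toward the EID database host
    JUMP_THRESHOLD_MS = 20.0      # assumption: flag the first hop that adds more than 20 ms

    def hop_latencies(dest):
        """Run traceroute and return a list of (hop_number, average_rtt_ms) tuples."""
        out = subprocess.run(["traceroute", "-n", dest],
                             capture_output=True, text=True, check=True).stdout
        hops = []
        for line in out.splitlines():
            hop = re.match(r"\s*(\d+)\s", line)
            rtts = [float(x) for x in re.findall(r"([\d.]+)\s*ms", line)]
            if hop and rtts:
                hops.append((int(hop.group(1)), sum(rtts) / len(rtts)))
        return hops

    def first_latency_jump(hops, threshold=JUMP_THRESHOLD_MS):
        # Compare each hop with the previous one and report the first large RTT increase.
        for (_, prev_rtt), (hop, rtt) in zip(hops, hops[1:]):
            if rtt - prev_rtt > threshold:
                return hop, rtt
        return None

    if __name__ == "__main__":
        jump = first_latency_jump(hop_latencies(DESTINATION))
        if jump:
            print("Latency jump starts at hop %d (about %.1f ms)" % jump)
        else:
            print("No significant hop-to-hop latency jump found")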
| Unplanned Outage | e3 Biometrics | 6/14/18 09:40 | 6/15/18 11:25 | 1:45 | NA | | E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with the Duty Officers, ICE EID and EDME LAN team. The Duty Officers are in the process of reaching out to the EMOC to join the bridge call. E3 will provide updates as they come. Please see the bridge call information below. | | OBP;#OFO | | | 6/15/18 09:20 | Agents in the field | 11109296 | Yes | NA |
10:48am - Mark with ICE joined the bridge call
10:59am - CBP engineering team is looking for security scans that may be in process, EID is looking into back up Jobs, E3 is awaiting for emoc to Join the bridge call to perform a trace rout
10:59am - DC1 confirmed the latency is picking up at 10.10.72.136 at hop 9, hops 9 through 16 are showing 30 milliseconds. hop 1 through 8 were 2-3 milliseconds Hop 9 is outside of DC1 .
11:02am - Duty officers are reaching to ONET to Join the bridge call
11:04am - lars session is going to start the process of recycling intake
11:11am - The TOC Joined the bridge call
11:12am - Terry from e3 joins bridge call.
11:14am - TOC (?) Asked if impact as same as yesterday, Only site reporting slowness is McAllen. AO portal is showing green 100%
11:16am - e3 asked if Duty Officer reached out to anyone at OneNet, they are still in the process of contacting.
11:19am - (Wes) e3 Still waiting on DHS OneNet resource to join bridge.
11:27am - still trying to get a DHS OneNet resource on e3 Bridge (waiting 25 minutes)
11:32am - Wes asked SCM if Biometrics cluster needed to be restarted, SCM concurred. McAllen stated they were having issues in which modules (Biometrics & Intake) Both clusters being restarted.
11:36am - Jake with e3 stepping away temporarily.
11:39am - Max capacity hit with bridge callers, OneNet cannot join, several e3 users dropping call.
11:41am - Maryann joins Bridge. A device has been highlighted where the latency has started, tracing route of device (NOC & EDME on call) No latency found on EDME side.
11:46am - Questioned asked if EDME LAN firewall was blocking anything & EDME has no firewall. Asked if switches have been checked. DC1 is checking 10.10.72.136 (router)
11:50am - EMOC sees routers bouncing back & forth on destination traffic from source.
11:53am - Wes asked if we can find out if a replication service is running on network. Being checked.
11:55am - issues seem to be coming from CBP block 10.159.112.7 on down and it’s bouncing. Problem seems to be in routing. Trying to get routing resources on bridge.
11:58am - EDME Lan confirm switch is operating properly on this side.
12:02pm - Asked if ISD is on, Problem is not at the DMC confirmed by NOC. Still trying to get to get routing group on call.. Could NAT’ing be the problem?
12:06pm - Jake confirms they are trying to get a routing resource.
12:07pm - CBP NOC stepping away.
12:10pm - Stating that the problem is somewhere at the NDC. Wes asked to know if we could network flow analyze the bandwidth being used. APP Dynamics showed problem started around 9:30am this morning & it looks like there is a multipath problem @NDC.
12:17pm - Traceroute has narrowed down issue to hop 9 at 10.10.72.136 (resides at OneNet not NDC) is being checked.
12:21pm - 10.10.72.136 is Crypto A & 10.10.72.137 is Crypto B. e3 is asking for confirmation by NOC.
12:27pm - The NOC cannot access these devices, only EDME.
12:30 PM- Server group sending info to EDME Lan group.
12:33 PM- now awaiting for traceroute from source to destination to get done. Asking if there was any slowness on Server at NDC, Server group not seeing it. Nothing is over resourced. Traffic very minimal on server
12:38 PM- Jose V unable to access APPD. E3 is investigating. Jake Bumbrey states everything is in the green EXCEPT Biometrics.
12.42pm - A Ping test was performed and the results came back into the RED.
12:43 PM- Jose is asking if anyone has other trouble shooting techniques to remedy this latency issue. Ping test into the RED on AOPortal. No code changes on the CBP side.
DC1 states that the circuit is the weakest link. They really need netflow to diagnose these types of issues.
12:47 PM- Normal latency time is 4.5 sec, Average has been 8 seconds with spike up to 11 running over the last half hour. Need to find bandwidth hog.
12:52 PM- Something seems to be running that no one knows. Sonny with EDME Lan asked if anyone has checked the WAN router. Who has rights to WAN router.
12:54 PM- DC Crypto A router connect to the WAN router then into OneNet.
12:58p PM- Now latency checking is being done on the WAN router
1:01 PM-ALL traffic is going thru this router on a 5Gb circuit, not just CBP traffic.
1:07 PM-Wesley & Megan from routing group haven’t found any routing issues but are trying to check to see if something is running in background.
1:10 PM-no latency being found. Want to check response time from network to ICE server & back. Asking who can do TCB dump on the ICE side to check server latency.
1:13 PM-BEIB U asking what’s going on. Donna joins call Ricky from IST has all the traceroutes & hasn’t found any issues. Reaching out for any assistance.
1:23 PM-Need DC1 people on call to access device 10.16.0.18 & do a traceroute. 10.16.36.85 is an unknown device.
1:25 PM-Wes at e3 asked if there was high database usage on the ICE side. Nothing out of the normal was found.
1:35 PM-e3 called RGC to find out why they are over capacity due to e3 or volume & it’s a volume issue, NOT e3. They are processing just fine.
1:41 PM-50% drop in CBP packets found & engineers are investigating.
1:53 PM- Packets are dropping at the ICE interface firewall.
2:00 PM-e3 support member Tigist Joined the bridge call
2:13 PM engineer still trouble shooting and asking
2: 20 PM-DC1 stated both Switch 193 port and interface is looking good
2:25 PM-Duty officer ask to e3 support to reach out to Weslaco and Tucson to ask about the performance
2:30 PM-E3 support gave out the performance Weslaco and Tucson has no issue | | | Second Occurrence - Situational Awareness: Latency Throughout e3 Modules | | E3 | Resolved: E3 Support & CBP engineers continue to troubleshoot. No new calls or tickets were being received from the field experiencing issues from the latency. Around 10:05AM a small slowness blip was noticed on the e3 Processing servers but it was nothing out of the ordinary. App Dynamics and Auto Ops looks stabilized. Bob Gram from Unix team has joined the bridge call to assist with the trouble shooting efforts. Unix engineer performed TCP capture on the Unix server to baseline connection from EID database to DC1. Network engineer reported that the baseline looked good and did not identify any issue. E3 latency issues have stabilized and are holding steady and E3 is processing normally. The bridge call has been shut down but teams continue to monitor for stability. Bridge call will be reconvened if reoccurrence happens later today and or over the weekend. Also, we will reconvene on the same bridge on Monday (6/18) at 0830 ET to resume discussion/investigation into the issue in case no problems are detected over this weekend. | NA | NA | Duty Officer’s, ICE EID and EDME Lan team.EMOC | Agents were unable to access application | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 6/14/18 09:40 | 6/15/18 11:25 | 1:45 | NA | | E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with the Duty Officers, ICE EID and EDME LAN team. The Duty Officers are in the process of reaching out to the EMOC to join the bridge call. E3 will provide updates as they come. Please see the bridge call information below. | | OBP;#OFO | | | 6/15/18 09:20 | Agents in the field | 11109296 | Yes | NA |
10:48am - Mark with ICE joined the bridge call
10:59am - CBP engineering team is looking for security scans that may be in process, EID is looking into back up Jobs, E3 is awaiting for emoc to Join the bridge call to perform a trace rout
10:59am - DC1 confirmed the latency is picking up at 10.10.72.136 at hop 9, hops 9 through 16 are showing 30 milliseconds. hop 1 through 8 were 2-3 milliseconds Hop 9 is outside of DC1 .
11:02am - Duty officers are reaching to ONET to Join the bridge call
11:04am - lars session is going to start the process of recycling intake
11:11am - The TOC Joined the bridge call
11:12am - Terry from e3 joins bridge call.
11:14am - TOC (?) Asked if impact as same as yesterday, Only site reporting slowness is McAllen. AO portal is showing green 100%
11:16am - e3 asked if Duty Officer reached out to anyone at OneNet, they are still in the process of contacting.
11:19am - (Wes) e3 Still waiting on DHS OneNet resource to join bridge.
11:27am - still trying to get a DHS OneNet resource on e3 Bridge (waiting 25 minutes)
11:32am - Wes asked SCM if Biometrics cluster needed to be restarted, SCM concurred. McAllen stated they were having issues in which modules (Biometrics & Intake) Both clusters being restarted.
11:36am - Jake with e3 stepping away temporarily.
11:39am - Max capacity hit with bridge callers, OneNet cannot join, several e3 users dropping call.
11:41am - Maryann joins Bridge. A device has been highlighted where the latency has started, tracing route of device (NOC & EDME on call) No latency found on EDME side.
11:46am - Questioned asked if EDME LAN firewall was blocking anything & EDME has no firewall. Asked if switches have been checked. DC1 is checking 10.10.72.136 (router)
11:50am - EMOC sees routers bouncing back & forth on destination traffic from source.
11:53am - Wes asked if we can find out if a replication service is running on network. Being checked.
11:55am - issues seem to be coming from CBP block 10.159.112.7 on down and it’s bouncing. Problem seems to be in routing. Trying to get routing resources on bridge.
11:58am - EDME Lan confirm switch is operating properly on this side.
12:02pm - Asked if ISD is on, Problem is not at the DMC confirmed by NOC. Still trying to get to get routing group on call.. Could NAT’ing be the problem?
12:06pm - Jake confirms they are trying to get a routing resource.
12:07pm - CBP NOC stepping away.
12:10pm - Stating that the problem is somewhere at the NDC. Wes asked to know if we could network flow analyze the bandwidth being used. APP Dynamics showed problem started around 9:30am this morning & it looks like there is a multipath problem @NDC.
12:17pm - Traceroute has narrowed down issue to hop 9 at 10.10.72.136 (resides at OneNet not NDC) is being checked.
12:21pm - 10.10.72.136 is Crypto A & 10.10.72.137 is Crypto B. e3 is asking for confirmation by NOC.
12:27pm - The NOC cannot access these devices, only EDME.
12:30 PM- Server group sending info to EDME Lan group.
12:33 PM- now awaiting for traceroute from source to destination to get done. Asking if there was any slowness on Server at NDC, Server group not seeing it. Nothing is over resourced. Traffic very minimal on server
12:38 PM- Jose V unable to access APPD. E3 is investigating. Jake Bumbrey states everything is in the green EXCEPT Biometrics.
12.42pm - A Ping test was performed and the results came back into the RED.
12:43 PM- Jose is asking if anyone has other trouble shooting techniques to remedy this latency issue. Ping test into the RED on AOPortal. No code changes on the CBP side.
DC1 states that the circuit is the weakest link. They really need netflow to diagnose these types of issues.
12:47 PM- Normal latency time is 4.5 sec, Average has been 8 seconds with spike up to 11 running over the last half hour. Need to find bandwidth hog.
12:52 PM- Something seems to be running that no one knows. Sonny with EDME Lan asked if anyone has checked the WAN router. Who has rights to WAN router.
12:54 PM- DC Crypto A router connect to the WAN router then into OneNet.
12:58p PM- Now latency checking is being done on the WAN router
1:01 PM-ALL traffic is going thru this router on a 5Gb circuit, not just CBP traffic.
1:07 PM-Wesley & Megan from routing group haven’t found any routing issues but are trying to check to see if something is running in background.
1:10 PM-no latency being found. Want to check response time from network to ICE server & back. Asking who can do TCB dump on the ICE side to check server latency.
1:13 PM-BEIB U asking what’s going on. Donna joins call Ricky from IST has all the traceroutes & hasn’t found any issues. Reaching out for any assistance.
1:23 PM-Need DC1 people on call to access device 10.16.0.18 & do a traceroute. 10.16.36.85 is an unknown device.
1:25 PM-Wes at e3 asked if there was high database usage on the ICE side. Nothing out of the normal was found.
1:35 PM-e3 called RGC to find out why they are over capacity due to e3 or volume & it’s a volume issue, NOT e3. They are processing just fine.
1:41 PM-50% drop in CBP packets found & engineers are investigating.
1:53 PM- Packets are dropping at the ICE interface firewall.
2:00 PM-e3 support member Tigist Joined the bridge call
2:13 PM engineer still trouble shooting and asking
2: 20 PM-DC1 stated both Switch 193 port and interface is looking good
2:25 PM-Duty officer ask to e3 support to reach out to Weslaco and Tucson to ask about the performance
2:30 PM-E3 support gave out the performance Weslaco and Tucson has no issue | | | Second Occurrence - Situational Awareness: Latency Throughout e3 Modules | | E3 | Resolved: E3 Support & CBP engineers continue to troubleshoot. No new calls or tickets were being received from the field experiencing issues from the latency. Around 10:05AM a small slowness blip was noticed on the e3 Processing servers but it was nothing out of the ordinary. App Dynamics and Auto Ops looks stabilized. Bob Gram from Unix team has joined the bridge call to assist with the trouble shooting efforts. Unix engineer performed TCP capture on the Unix server to baseline connection from EID database to DC1. Network engineer reported that the baseline looked good and did not identify any issue. E3 latency issues have stabilized and are holding steady and E3 is processing normally. The bridge call has been shut down but teams continue to monitor for stability. Bridge call will be reconvened if reoccurrence happens later today and or over the weekend. Also, we will reconvene on the same bridge on Monday (6/18) at 0830 ET to resume discussion/investigation into the issue in case no problems are detected over this weekend. | NA | NA | Duty Officer’s, ICE EID and EDME Lan team.EMOC | Agents were unable to access application | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Assault | 6/13/18 15:55 | 6/13/18 20:40 | 4:45 | NA | | E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with the Duty Officers. The E3 software configuration team is in the process of recycling the e3 services to ensure the e3 applications are stable. The Duty Officers are in the process of reaching out to the NOC and ICE DBAs to see if they are experiencing any issues. E3 will provide updates as they come. Please see the bridge call information below. | | OBP;#OFO | | | 6/13/18 15:55 | Agents in the field | 11120979 | Yes | NA | NA | | | First Occurrence Situational Awareness: Latency Throughout e3 Modules | | E3 | The E3 Software Configuration Manager has confirmed that the response times for e3 applications are within norms. ICE DBA engineers were not able to identify any issue with their database. E3 applications are operational and E3 Support will monitor throughout the remainder of the night. We will reopen the bridge call in the morning and follow up with the engineers for a status check on where we stand. | NA | NA | ICE DBA, E3 Software Configuration Manager, Duty Officer | Agents were unable to access e3 | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 6/13/18 15:55 | 6/13/18 20:40 | 4:45 | NA | | : E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with Duty Officer’s. E3 software configuration team is in the process of recycling e3 services, to ensure e3 applications are stable. The Duty Officers are in the process of reaching out to the NOC and ICE DBA’s to see if they are experiencing any issues. E3 will provided updates as they come. Please see Bridge call information below. | | OBP;#OFO | | | 6/13/18 15:55 | Agents in the field | 11120979 | Yes | NA | NA | | | First Occurrence Situational Awareness: latency throughout e3 modules | | E3 | E3 Software Configuration Manager has confirmed that the response time for e3 applications are within norms. ICE DBA engineers were not able to identify any issue with their database. E3 applications are operational and E3 Support will monitor throughout the remainder of the night. We will reopen the bridge call in the morning and follow up with the engineers to get status check and progress where we stand. | NA | NA | ICE DBA,, E3 Software configuration manager, Duty Officer | Agent were unable to access e3 | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 6/13/18 15:55 | 6/13/18 20:40 | 4:45 | NA | | : E3 Support has received notification from the field that they are experiencing latency issues throughout the E3 Applications. There is currently a bridge call in process with Duty Officer’s. E3 software configuration team is in the process of recycling e3 services, to ensure e3 applications are stable. The Duty Officers are in the process of reaching out to the NOC and ICE DBA’s to see if they are experiencing any issues. E3 will provided updates as they come. Please see Bridge call information below. | | OBP;#OFO | | | 6/13/18 15:55 | Agents in the field | 11120979 | Yes | NA | NA | | | First Occurrence Situational Awareness: latency throughout e3 modules | | E3 | E3 Software Configuration Manager has confirmed that the response time for e3 applications are within norms. ICE DBA engineers were not able to identify any issue with their database. E3 applications are operational and E3 Support will monitor throughout the remainder of the night. We will reopen the bridge call in the morning and follow up with the engineers to get status check and progress where we stand. | NA | NA | ICE DBA,, E3 Software configuration manager, Duty Officer | Agent were unable to access e3 | NA | NA | NA | NA | NA | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 5/17/18 03:00 | 5/17/18 05:00 | 2:00 | N/A | N/A | ICE/EID Maintenance for Thursday, May 17th, 2018
Implementation Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Outage Date/Time: Thursday May 17, 3:00 am (EST) – Thursday May 17, 5:00 am (EST)
Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
Recommended Actions by Customers:
Upon completion of the maintenance activities the e3 Homepage (https://e3-p.cbp.dhs.gov/e3Home/index.jsp) will be updated.
Reporting Problems or Questions:
All users should report any issues related to this maintenance change to the CBP Technology Service Desk at 1-800-927-8729 or email e3 Support at e3Support@cbp.dhs.gov and reference CBP Ticket# 10603466 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10603466 | No | | | 5/17/18 05:00 | 5/17/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Assault | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 4/19/18 03:00 | 4/19/18 05:00 | 2:00 | N/A | N/A | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 10216173 | Yes | | | 4/19/18 05:00 | 4/19/18 03:00 | ICE EID Production Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 4/10/18 03:20 | 4/10/18 09:40 | 6:20 | N/A | | e3 Support was notified by the Technology Operations Center (TOC) that the e3 IXM Services servers were experiencing errors from 30% to 80%, affecting e3 S_E, S_O, IDENT & TPRS transactions. After further investigation, e3 Software Configuration Managers identified a PRD issue with IXM Services due to the EID database possibly running out of space on an LOB update. Two of the four IXM Service apps in Prod started having issues at approximately 3:00 AM this morning. E3 Support has reached out to the ICE EID team to check if table space can be added. A bridge call has been established with CBP Duty Officers, TOC, e3 Software Configuration Engineers and the ICE DBA team to investigate issues. | | OBP;#OFO;#OFO/SIGMA | | | 4/10/18 04:20 | Technology Operations Center | 10139233 | Yes | N/A | 4/10/2018
4:10 AM -e3 support received an alert from the Technology Operation Center (TOC) for the following IXM server clusters P0025 & P0026 spewing out errors at a close to 50% (1/2) rate.
4:18 AM-The TSD called the e3 support line requesting a response to the alert that was sent via email.
4:19 AM - An e3 support member responded to the alert with a request to recycle the IXM servers. Due to no response from the EDME Web Team, e3 support engaged the CBP Duty Officers to escalate to EDME Web Services that we would need a rolling recycle of the e3 IXM services.
4:47 AM- The Enterprise Operations Center (EOC) responded to e3 support’s email that EDME Web Services would handle the request.
4:54 AM -EDME started the rolling recycle for IXM servers e3EXTService_bemms-p0025, bemms-p0026, bemms-p0027 and bemms-p0030.
5:05 AM -The recycle was completed and there were no more reports of any issues with the IXM Server.
7:00 AM - e3 support checked AppDynamics to ensure services were still operating in a normal fashion, and noticed errors were being returned. E3 support engaged the e3 Software Configuration team to investigate further. After further investigation our Software Config team was unable to determine the source of the errors and therefore escalated the issue to the e3 Developers.
7:46 AM-E3 Software Developers identified that there was a PRD issue with IXM Services due to a possible EID database running out of space on an LOB update. Two of the four IXM Service apps in Prod started having issues at approximately 3:00 AM this morning.
8:32 AM - e3 Developers reached out to the ICE EID team to check if table space could be added.
9:29 AM - E3 Support contacted the CBP duty officers to establish a bridge call with the TOC, e3 Software Configuration Engineers and the ICE DBA team.
9:38 AM - The ICE DBA team corrected all IXM issues impacting e3 Biometric Search Enroll (SE) and Search Only (SO) transactions by adding additional space to the IXM request and response tables. E3 Software Configuration Engineers confirmed the alerts cleared within AppDynamics, showing a drop from 22% to 0% in the logs, and the issue was considered resolved. | | | Errors on IXM Services servers erroring out at 30% to 80% | | JABS | EID DBAs increased space by 223G on the IXM request and response tables | EID database was running out of space on an LOB update. | N/A | e3, JABS, OBIM | IXM submissions will fail since request data can't be written to EID | N/A | EID DBAs | EID DBAs increased space by 223G on the IXM request and response tables | Up & Running | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
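For reference, a minimal sketch of the kind of space check and tablespace extension described in the resolution above (adding space so IXM request/response LOB writes stop failing). The connection details, tablespace name, datafile path, and sizes are illustrative assumptions, not values taken from this record.

# Illustrative sketch only: connection details, tablespace and datafile names are assumptions.
import cx_Oracle

conn = cx_Oracle.connect("dba_user", "dba_password", "eid-db-host/EIDPRD")
cur = conn.cursor()

# Check remaining free space in the tablespace holding the IXM request/response LOBs.
cur.execute("""
    SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS free_gb
    FROM dba_free_space
    WHERE tablespace_name = :ts
    GROUP BY tablespace_name
""", ts="IXM_DATA")
print(cur.fetchall())

# Extend the tablespace so LOB inserts (IXM request/response payloads) no longer fail.
cur.execute("""
    ALTER TABLESPACE IXM_DATA
    ADD DATAFILE '/u01/oradata/EIDPRD/ixm_data_02.dbf'
    SIZE 50G AUTOEXTEND ON NEXT 1G MAXSIZE 223G
""")
conn.close()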
| Unplanned Outage | e3 Processing | 4/10/18 03:20 | 4/10/18 09:40 | 6:20 | N/A | | e3 Support was notified by the Technology Operations Center (TOC) that the e3 IXM Services servers were experiencing errors from 30% to 80%, affecting e3 S_E, S_O, IDENT & TPRS transactions. After further investigation, e3 Software Configuration Managers identified a PRD issue with IXM Services due to the EID database possibly running out of space on an LOB update. Two of the four IXM Service apps in Prod started having issues at approximately 3:00 AM this morning. E3 Support has reached out to the ICE EID team to check if table space can be added. A bridge call has been established with CBP Duty Officers, TOC, e3 Software Configuration Engineers and the ICE DBA team to investigate issues. | | OBP;#OFO;#OFO/SIGMA | | | 4/10/18 04:20 | Technology Operations Center | 10139233 | Yes | N/A | 4/10/2018
4:10 AM -e3 support received an alert from the Technology Operation Center (TOC) for the following IXM server clusters P0025 & P0026 spewing out errors at a close to 50% (1/2) rate.
4:18 AM-The TSD called the e3 support line requesting a response to the alert that was sent via email.
4:19 AM - An e3 support member responded to the alert with a request to recycle the IXM servers. Due to no response from the EDME Web Team, e3 support engaged the CBP Duty Officers to escalate to EDME Web Services that we would need a rolling recycle of the e3 IXM services.
4:47 AM- The Enterprise Operations Center (EOC) responded to e3 support’s email that EDME Web Services would handle the request.
4:54 AM -EDME started the rolling recycle for IXM servers e3EXTService_bemms-p0025, bemms-p0026, bemms-p0027 and bemms-p0030.
5:05 AM -The recycle was completed and there were no more reports of any issues with the IXM Server.
7:00 AM - e3 support checked AppDynamics to ensure services were still operating in a normal fashion, and noticed errors were being returned. E3 support engaged the e3 Software Configuration team to investigate further. After further investigation our Software Config team was unable to determine the source of the errors and therefore escalated the issue to the e3 Developers.
7:46 AM-E3 Software Developers identified that there was a PRD issue with IXM Services due to a possible EID database running out of space on an LOB update. Two of the four IXM Service apps in Prod started having issues at approximately 3:00 AM this morning.
8:32 AM - e3 Developers reached out to the ICE EID team to check if table space could be added.
9:29 AM - E3 Support contacted the CBP duty officers to establish a bridge call with the TOC, e3 Software Configuration Engineers and the ICE DBA team.
9:38 AM - The ICE DBA team corrected all IXM issues impacting e3 Biometric Search Enroll (SE) and Search Only (SO) transactions by adding additional space to the IXM request and response tables. E3 Software Configuration Engineers confirmed the alerts cleared within AppDynamics, showing a drop from 22% to 0% in the logs, and the issue was considered resolved. | | | Errors on IXM Services servers erroring out at 30% to 80% | | JABS | EID DBAs increased space by 223G on the IXM request and response tables | EID database was running out of space on an LOB update. | N/A | e3, JABS, OBIM | IXM submissions will fail since request data can't be written to EID | N/A | EID DBAs | EID DBAs increased space by 223G on the IXM request and response tables | Up & Running | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Detentions | 4/4/18 10:35 | 4/4/18 15:15 | 4:40 | | | E3 Support received notification from the Technology Operations Center (TOC) that their engineers have observed JDBC Connection Pool Runtime alerts for the e3 Detentions application. At approx. 11:23 AM the e3 Software Configuration Manager restarted the e3 Detentions application servers in a rolling fashion. Following the restart of the application the TOC advised that they were still receiving alerts in AppDynamics for e3 Detentions. Software Configuration Engineers from e3 have engaged the ICE EID Development team technical manager to investigate issues on their side. E3 support is currently in the process of starting a bridge call to remediate the extreme slowness seen within the Detentions application. Bridge call info will be provided in the next update. | | OBP | | | 4/4/18 10:35 | Technology Operations Center (TOC) | 10106890 | Yes | | 10:38am APPD response times showing at 65600 ms, first email today.
1:32pm Bridge call spun up by Duty Officers, joined by SCM & e3 Support.
1:37pm TOC joined, Sharing of e3 Checklist
1:38pm Bibhu Sharma (Major Incident Manager) joins call
1:42pm Crystal from the NOC joins.
1:47pm Vidyi Joins Bridge.
1:50pm NOC running through e3 Checklist, Baskhar from EWS joined, Gregory From NOC, DAC on bridge,
1:54pm Confirmation that the other e3 modules are not affected; only e3 Detentions is being reported on.
1:57pm Vikram from ICE on call, asked to verify if they have any issues on their side.
1:59pm Rochelle joined call & asks if normal e3 strategies have been followed.
2:02pm EID DBA to join shortly, Detentions response up to 220,500ms now.
2:04pm Scott from NOC joins, Vikram explains email that Jake hasn’t received yet. Problems with email.
2:06pm Oracle 12 patch is involved with the slowness on ICE side, scheduling of patch is the issue (2 weeks) needs to be escalated to top EID
2:06pm Notification from McAllen that they are down in e3 Detentions. Trying to kill the current blocking sessions so the Detentions farm can be restarted.
2:11pm Issue escalated up to the ICE CIO for patching problem. ICE patching group isn’t ready for immediate solution rollout.
2:15pm e3 Detentions Site Down page implemented.
2:20pm Blocking sessions killed, recycle of e3 detentions farm being done.
2:26pm Detentions site back up, checking with all sites that reported problems
2:31pm After recycle blocking sessions are going right back up (27000ms)
2:33pm CBP NOC joiners leave bridge. E3 stated that EID must find a way to single patch their system
2:38pm Blocking sessions keep generating so ICE will have to keep running that script.
2:40pm Wes Gould, PM of e3, states that waiting 10 days is going to be untenable. Camilla stated that they need to get all project sponsors to approve the update being moved up & determine what is involved.
ICE is calling Oracle to get help with this issue. All ICE SAs are in training this week & will be needed to do the patching & testing. The testing schedule should be done by the 7th of April.
2:46pm Huge latency is still being observed but ICE states it's from the ICE Eagle side. | | | Slowness Issue Within e3 Detentions Application | ICE EID | ICE/EID | The temporary fix until tomorrow night is to kill all of the blocked sessions & recycle the e3 Detentions server farm one more time. ICE will monitor along with e3 to make sure latency doesn't become an issue before the patch is done (starting at 7pm 4/5/18). | Upon further investigation ICE DBAs found a slew of blocked sessions in their database from e3 bookings, mirroring the same experience ICE had on 4/3/18 & last week with different schemas. Oracle Support confirmed that it's a known bug and that the database needs a patch (which was to be installed on 4/14/18). Knowing that the situation as it stood was going to be untenable, e3 asked ICE to follow their procedures and move up the patching schedule. The temporary fix until tomorrow night is to kill all of the blocked sessions & recycle the e3 Detentions server farm one more time. ICE will monitor along with e3 to make sure latency doesn't become an issue before the patch is done (starting at 7pm 4/5/18). | ICE EID | ICE EID | e3 Detentions is currently up and accessible, but users accessing the application will experience extremely slow processing speeds. E3 support has received a total of 7 tickets reporting slowness in Detentions including (2 McAllen, 1 El Paso, 1 Tucson, 1 Carrizo Springs, 1 Nogales). | N/A | ICE EID | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
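For reference, a minimal sketch of the temporary fix described above (killing the blocked sessions before recycling the e3 Detentions farm) as it might be run against an Oracle database. The connection details are illustrative assumptions; the v$session query and KILL SESSION syntax are standard Oracle.

# Illustrative sketch only: connection details are assumptions.
import cx_Oracle

conn = cx_Oracle.connect("dba_user", "dba_password", "eid-db-host/EIDPRD")
cur = conn.cursor()

# Find sessions that are currently blocked and the sessions blocking them.
cur.execute("""
    SELECT blocking_session, sid, serial#, username, seconds_in_wait
    FROM v$session
    WHERE blocking_session IS NOT NULL
    ORDER BY blocking_session
""")
blocked = cur.fetchall()

# Kill each distinct blocking session so the Detentions farm can be recycled cleanly.
for blocker_sid in {row[0] for row in blocked}:
    cur.execute("SELECT serial# FROM v$session WHERE sid = :sid", sid=blocker_sid)
    row = cur.fetchone()
    if row:
        cur.execute(f"ALTER SYSTEM KILL SESSION '{blocker_sid},{row[0]}' IMMEDIATE")

conn.close()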
| Significant Issue | e3 Biometrics | 3/26/18 11:05 | 3/26/18 13:25 | 2:20 | | | Upon investigation, it appears that there are latency issues connecting to the EID database which in turn is causing connectivity issue to all e3 modules. E3 Support is reaching out to Duty Officers to setup a bridge call. | | OBP;#OFO | | | 3/26/18 11:05 | TSD | 10060824 | Yes | | 10:59 Spike with connectivity to EID first noticed, 144 sec system wide.
11:03 e3 noticed major latency in connecting to the EID database, causing connectivity issues across all e3 modules.
11:13 Bridge started with e3, SCM & Duty Officer
11:25 Fungol (?) from ICE joined call, e3 explained major latency spikes to ICE. No backups during the day; ICE is checking the backup schedule.
11:28 SCM going to recycle e3 Detentions servers because of its warning state.
11:30 Duty Officers now have EID after
11:31 Brent joined call,
11:32 CBP NOC joined call (Audra)
11:36 Jackie from ASD joined call. E3 PM Wes Gould wants to get to bottom of latency issue because of weekend issues
11:37 Audra confirmed spikes on the EID side
11:38 Ray from the NOC joined call. APPD showed another spike at 11:38. Fongol found issues with the EARM EID application causing latency. Wes Gould trying to find the owner of the ICE app to possibly have the app killed. EID engineers in a call right now. Asking if the user account could be the cause of yesterday's issues as well.
11:46 EARM APP 1 is up to 162 user sessions at this time.
11:50 Confirmed the EID database e3 connects to is affected by the EARM APP 1 sessions. Some underlying sessions are blocking the EARM app.
11:54 EAGLE app in SIGMA is blocking other users. Trying to get ICE DBA’s to remedy issue.
12:02 Duty Officer asked if ICE is still seeing blocked sessions on their side & they are but it varies
12:03 There is definitely a concurrency issue with e3 & SIGMA users accessing the database. How is this being resolved?
12:03 Another large spike observed by e3
12:06 May need to get the SIGMA folks on bridge
12:12 ICE is trying to investigate the Oracle database issues
12:15 No one else but e3 is being affected by this latency issue
12:17 No CR’s implemented by e3 that could have caused this
12:19 Lars from the SCM group logging off because of internet issues.
12:20 ICE engineers could not find any issues on the connectivity side as to why. E3 is reaching out to SIGMA to join call. Also reaching out to network.
12:33 e3 PM Wes talking to SIGMA PM Mike at the moment.
12:37 GOV Emp Table programs could be causing blocking issue (IMM). Even though they have been working for months.
12:46 ICE is putting in a request to restart ICE EARM servers,
12:54 ICE engineers are seeing a session that may be hanging on a “Select Statement”, trying to kill just this session which started yesterday.
13:04 ICE Engineers think session may have disappeared, checking to see if it is still peaking / spiking.
13:11 Cleared session seems to have remedied the issue; testing going on at the moment. Duty officers asking how we find out next time before it goes system wide. System apps are loading normally without the latency.
13:16 e3 inquiring about which select statement was causing the issue. No definitive answer. The select statement gathered information for improving performance (library cache was locked).
13:25 Bridge shut down | | | Latency Issue With e3 Modules | ICE EID | ICE/EID | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. The EID schema service was locked in service, running since yesterday (approximately 1700 minutes), and we believe this to be what caused the latency issues. The EID team has terminated this schema service account and response times within production have since returned to normal ranges. The EID team is currently reassessing how they gather these statistics so that it will be less impactful to our production applications in the future. E3 Support will continue to monitor our application performance for the remainder of the day to ensure that our applications stay up and functional. | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. | ICE EID | NOC/EID DBA | When launching the e3 application, e3 modules will not load. | | ICE EID | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
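The bridge asked how this could be caught before it goes system wide next time. A minimal sketch of one approach, flagging long-running active sessions (such as the statistics-gathering session that had run for roughly 1700 minutes); the connection details and the one-hour threshold are illustrative assumptions.

# Illustrative sketch only: connection details and the 60-minute threshold are assumptions.
import cx_Oracle

LONG_RUNNING_MINUTES = 60

conn = cx_Oracle.connect("monitor_user", "monitor_password", "eid-db-host/EIDPRD")
cur = conn.cursor()

# Flag active user sessions whose current call has been running longer than the threshold.
cur.execute("""
    SELECT sid, serial#, username, program, ROUND(last_call_et / 60) AS active_minutes
    FROM v$session
    WHERE status = 'ACTIVE'
      AND type = 'USER'
      AND last_call_et > :threshold_seconds
    ORDER BY last_call_et DESC
""", threshold_seconds=LONG_RUNNING_MINUTES * 60)

for sid, serial, username, program, minutes in cur:
    print(f"Long-running session: sid={sid} serial#={serial} user={username} "
          f"program={program} active for {minutes} minutes")

conn.close()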
| Significant Issue | e3 FPQ | 3/26/18 11:05 | 3/26/18 13:25 | 2:20 | | | Upon investigation, it appears that there are latency issues connecting to the EID database which in turn is causing connectivity issue to all e3 modules. E3 Support is reaching out to Duty Officers to setup a bridge call. | | OBP;#OFO | | | 3/26/18 11:05 | TSD | 10060824 | Yes | | 10:59 Spike with connectivity to EID first noticed, 144 sec system wide.
11:03 e3 noticed major latency in connecting to the EID database, causing connectivity issues across all e3 modules.
11:13 Bridge started with e3, SCM & Duty Officer
11:25 Fungol (?) from ICE joined call, e3 explained major latency spikes to ICE. No backups during the day; ICE is checking the backup schedule.
11:28 SCM going to recycle e3 Detentions servers because of its warning state.
11:30 Duty Officers now have EID after
11:31 Brent joined call,
11:32 CBP NOC joined call (Audra)
11:36 Jackie from ASD joined call. E3 PM Wes Gould wants to get to bottom of latency issue because of weekend issues
11:37 Audra confirmed spikes on the EID side
11:38 Ray from the NOC joined call. APPD showed another spike at 11:38. Fongol found issues with the EARM EID application causing latency. Wes Gould trying to find the owner of the ICE app to possibly have the app killed. EID engineers in a call right now. Asking if the user account could be the cause of yesterday's issues as well.
11:46 EARM APP 1 is up to 162 user sessions at this time.
11:50 Confirmed the EID database e3 connects to is affected by the EARM APP 1 sessions. Some underlying sessions are blocking the EARM app.
11:54 EAGLE app in SIGMA is blocking other users. Trying to get ICE DBA’s to remedy issue.
12:02 Duty Officer asked if ICE is still seeing blocked sessions on their side & they are but it varies
12:03 There is definitely a concurrency issue with e3 & SIGMA users accessing the database. How is this being resolved?
12:03 Another large spike observed by e3
12:06 May need to get the SIGMA folks on bridge
12:12 ICE is trying to investigate the Oracle database issues
12:15 No one else but e3 is being affected by this latency issue
12:17 No CR’s implemented by e3 that could have caused this
12:19 Lars from the SCM group logging off because of internet issues.
12:20 ICE engineers could not find any issues on the connectivity side as to why. E3 is reaching out to SIGMA to join call. Also reaching out to network.
12:33 e3 PM Wes talking to SIGMA PM Mike at the moment.
12:37 GOV Emp Table programs could be causing blocking issue (IMM). Even though they have been working for months.
12:46 ICE is putting in a request to restart ICE EARM servers,
12:54 ICE engineers are seeing a session that may be hanging on a “Select Statement”, trying to kill just this session which started yesterday.
13:04 ICE Engineers think session may have disappeared, checking to see if it is still peaking / spiking.
13:11 Cleared session seems to have remedied the issue; testing going on at the moment. Duty officers asking how we find out next time before it goes system wide. System apps are loading normally without the latency.
13:16 e3 inquiring about which select statement was causing the issue. No definitive answer. The select statement gathered information for improving performance (library cache was locked).
13:25 Bridge shut down | | | Latency Issue With e3 Modules | ICE EID | ICE/EID | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. The EID schema service was locked in service, running since yesterday (approximately 1700 minutes), and we believe this to be what caused the latency issues. The EID team has terminated this schema service account and response times within production have since returned to normal ranges. The EID team is currently reassessing how they gather these statistics so that it will be less impactful to our production applications in the future. E3 Support will continue to monitor our application performance for the remainder of the day to ensure that our applications stay up and functional. | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. | ICE EID | NOC/EID DBA | When launching the e3 application, e3 modules will not load. | | ICE EID | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Detentions | 3/26/18 11:05 | 3/26/18 13:25 | 2:20 | | | Upon investigation, it appears that there are latency issues connecting to the EID database which in turn is causing connectivity issue to all e3 modules. E3 Support is reaching out to Duty Officers to setup a bridge call. | | OBP;#OFO | | | 3/26/18 11:05 | TSD | 10060824 | Yes | | 10:59 Spike with connectivity to EID first noticed, 144 sec system wide.
11:03 e3 noticed major latency in connecting to the EID database, causing connectivity issues across all e3 modules.
11:13 Bridge started with e3, SCM & Duty Officer
11:25 Fungol (?) from ICE joined call, e3 explained major latency spikes to ICE. No backups during the day; ICE is checking the backup schedule.
11:28 SCM going to recycle e3 Detentions servers because of its warning state.
11:30 Duty Officers now have EID after
11:31 Brent joined call,
11:32 CBP NOC joined call (Audra)
11:36 Jackie from ASD joined call. E3 PM Wes Gould wants to get to bottom of latency issue because of weekend issues
11:37 Audra confirmed spikes on the EID side
11:38 Ray from the NOC joined call. APPD showed another spike at 11:38. Fongol found issues with the EARM EID application causing latency. Wes Gould trying to find the owner of the ICE app to possibly have the app killed. EID engineers in a call right now. Asking if the user account could be the cause of yesterday's issues as well.
11:46 EARM APP 1 is up to 162 user sessions at this time.
11:50 Confirmed the EID database e3 connects to is affected by the EARM APP 1 sessions. Some underlying sessions are blocking the EARM app.
11:54 EAGLE app in SIGMA is blocking other users. Trying to get ICE DBA’s to remedy issue.
12:02 Duty Officer asked if ICE is still seeing blocked sessions on their side & they are but it varies
12:03 There is definitely a concurrency issue with e3 & SIGMA users accessing the database. How is this being resolved?
12:03 Another large spike observed by e3
12:06 May need to get the SIGMA folks on bridge
12:12 ICE is trying to investigate the Oracle database issues
12:15 No one else but e3 is being affected by this latency issue
12:17 No CR’s implemented by e3 that could have caused this
12:19 Lars from the SCM group logging off because of internet issues.
12:20 ICE engineers could not find any issues on the connectivity side as to why. E3 is reaching out to SIGMA to join call. Also reaching out to network.
12:33 e3 PM Wes talking to SIGMA PM Mike at the moment.
12:37 GOV Emp Table programs could be causing blocking issue (IMM). Even though they have been working for months.
12:46 ICE is putting in a request to restart ICE EARM servers,
12:54 ICE engineers are seeing a session that may be hanging on a “Select Statement”, trying to kill just this session which started yesterday.
13:04 ICE Engineers think session may have disappeared, checking to see if it is still peaking / spiking.
13:11 Cleared session seems to have remedied the issue; testing going on at the moment. Duty officers asking how we find out next time before it goes system wide. System apps are loading normally without the latency.
13:16 e3 inquiring about which select statement was causing the issue. No definitive answer. The select statement gathered information for improving performance (library cache was locked).
13:25 Bridge shut down | | | Latency Issue With e3 Modules | ICE EID | ICE/EID | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. The EID schema service was locked in service, running since yesterday (approximately 1700 minutes), and we believe this to be what caused the latency issues. The EID team has terminated this schema service account and response times within production have since returned to normal ranges. The EID team is currently reassessing how they gather these statistics so that it will be less impactful to our production applications in the future. E3 Support will continue to monitor our application performance for the remainder of the day to ensure that our applications stay up and functional. | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. | ICE EID | NOC/EID DBA | When launching the e3 application, e3 modules will not load. | | ICE EID | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Processing | 3/26/18 11:05 | 3/26/18 13:25 | 2:20 | | | Upon investigation, it appears that there are latency issues connecting to the EID database which in turn is causing connectivity issue to all e3 modules. E3 Support is reaching out to Duty Officers to setup a bridge call. | | OBP;#OFO | | | 3/26/18 11:05 | TSD | 10060824 | Yes | | 10:59 Spike with connectivity to EID first noticed, 144 sec system wide.
11:03 e3 noticed major latency in connecting to the EID database, causing connectivity issues across all e3 modules.
11:13 Bridge started with e3, SCM & Duty Officer
11:25 Fungol (?) from ICE joined call, e3 explained major latency spikes to ICE. No backups during the day; ICE is checking the backup schedule.
11:28 SCM going to recycle e3 Detentions servers because of its warning state.
11:30 Duty Officers now have EID after
11:31 Brent joined call,
11:32 CBP NOC joined call (Audra)
11:36 Jackie from ASD joined call. E3 PM Wes Gould wants to get to bottom of latency issue because of weekend issues
11:37 Audra confirmed spikes on the EID side
11:38 Ray from the NOC joined call. APPD showed another spike at 11:38. Fongol found issues with the EARM EID application causing latency. Wes Gould trying to find the owner of the ICE app to possibly have the app killed. EID engineers in a call right now. Asking if the user account could be the cause of yesterday's issues as well.
11:46 EARM APP 1 is up to 162 user sessions at this time.
11:50 Confirmed the EID database e3 connects to is affected by the EARM APP 1 sessions. Some underlying sessions are blocking the EARM app.
11:54 EAGLE app in SIGMA is blocking other users. Trying to get ICE DBA’s to remedy issue.
12:02 Duty Officer asked if ICE is still seeing blocked sessions on their side & they are but it varies
12:03 There is definitely a concurrency issue with e3 & SIGMA users accessing the database. How is this being resolved?
12:03 Another large spike observed by e3
12:06 May need to get the SIGMA folks on bridge
12:12 ICE is trying to investigate the Oracle database issues
12:15 No one else but e3 is being affected by this latency issue
12:17 No CR’s implemented by e3 that could have caused this
12:19 Lars from the SCM group logging off because of internet issues.
12:20 ICE engineers could not find any issues on the connectivity side as to why. E3 is reaching out to SIGMA to join call. Also reaching out to network.
12:33 e3 PM Wes talking to SIGMA PM Mike at the moment.
12:37 GOV Emp Table programs could be causing blocking issue (IMM). Even though they have been working for months.
12:46 ICE is putting in a request to restart ICE EARM servers,
12:54 ICE engineers are seeing a session that may be hanging on a “Select Statement”, trying to kill just this session which started yesterday.
13:04 ICE Engineers think session may have disappeared, checking to see if it is still peaking / spiking.
13:11 Cleared session seems to have remedied the issue; testing going on at the moment. Duty officers asking how we find out next time before it goes system wide. System apps are loading normally without the latency.
13:16 e3 inquiring about which select statement was causing the issue. No definitive answer. The select statement gathered information for improving performance (library cache was locked).
13:25 Bridge shut down | | | Latency Issue With e3 Modules | ICE EID | ICE/EID | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. The EID schema service was locked in service, running since yesterday (approximately 1700 minutes), and we believe this to be what caused the latency issues. The EID team has terminated this schema service account and response times within production have since returned to normal ranges. The EID team is currently reassessing how they gather these statistics so that it will be less impactful to our production applications in the future. E3 Support will continue to monitor our application performance for the remainder of the day to ensure that our applications stay up and functional. | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. | ICE EID | NOC/EID DBA | When launching the e3 application, e3 modules will not load. | | ICE EID | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Prosecutions | 3/26/18 11:05 | 3/26/18 13:25 | 2:20 | | | Upon investigation, it appears that there are latency issues connecting to the EID database which in turn is causing connectivity issue to all e3 modules. E3 Support is reaching out to Duty Officers to setup a bridge call. | | OBP;#OFO | | | 3/26/18 11:05 | TSD | 10060824 | Yes | | 10:59 Spike with connectivity to EID first noticed, 144 sec system wide.
11:03 e3 noticed major latency in connecting to the EID database, causing connectivity issues across all e3 modules.
11:13 Bridge started with e3, SCM & Duty Officer
11:25 Fungol (?) from ICE joined call, e3 explained major latency spikes to ICE. No backups during the day; ICE is checking the backup schedule.
11:28 SCM going to recycle e3 Detentions servers because of its warning state.
11:30 Duty Officers now have EID after
11:31 Brent joined call,
11:32 CBP NOC joined call (Audra)
11:36 Jackie from ASD joined call. E3 PM Wes Gould wants to get to bottom of latency issue because of weekend issues
11:37 Audra confirmed spikes on the EID side
11:38 Ray from the NOC joined call. APPD showed another spike at 11:38. Fongol found issues with the EARM EID application causing latency. Wes Gould trying to find the owner of the ICE app to possibly have the app killed. EID engineers in a call right now. Asking if the user account could be the cause of yesterday's issues as well.
11:46 EARM APP 1 is up to 162 user sessions at this time.
11:50 Confirmed the EID database e3 connects to is affected by the EARM APP 1 sessions. Some underlying sessions are blocking the EARM app.
11:54 EAGLE app in SIGMA is blocking other users. Trying to get ICE DBA’s to remedy issue.
12:02 Duty Officer asked if ICE is still seeing blocked sessions on their side & they are but it varies
12:03 There is definitely a concurrency issue with e3 & SIGMA users accessing the database. How is this being resolved?
12:03 Another large spike observed by e3
12:06 May need to get the SIGMA folks on bridge
12:12 ICE is trying to investigate the Oracle database issues
12:15 No one else but e3 is being affected by this latency issue
12:17 No CR’s implemented by e3 that could have caused this
12:19 Lars from the SCM group logging off because of internet issues.
12:20 ICE engineers could not find any issues on the connectivity side as to why. E3 is reaching out to SIGMA to join call. Also reaching out to network.
12:33 e3 PM Wes talking to SIGMA PM Mike at the moment.
12:37 GOV Emp Table programs could be causing blocking issue (IMM). Even though they have been working for months.
12:46 ICE is putting in a request to restart ICE EARM servers,
12:54 ICE engineers are seeing a session that may be hanging on a “Select Statement”, trying to kill just this session which started yesterday.
13:04 ICE Engineers think session may have disappeared, checking to see if it is still peaking / spiking.
13:11 Cleared session seems to have remedied the issue; testing going on at the moment. Duty officers asking how we find out next time before it goes system wide. System apps are loading normally without the latency.
13:16 e3 inquiring about which select statement was causing the issue. No definitive answer. The select statement gathered information for improving performance (library cache was locked).
13:25 Bridge shut down | | | Latency Issue With e3 Modules | ICE EID | ICE/EID | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. The EID schema service was locked in service, running since yesterday (approximately 1700 minutes), and we believe this to be what caused the latency issues. The EID team has terminated this schema service account and response times within production have since returned to normal ranges. The EID team is currently reassessing how they gather these statistics so that it will be less impactful to our production applications in the future. E3 Support will continue to monitor our application performance for the remainder of the day to ensure that our applications stay up and functional. | ICE identified the EID schema service account as the culprit. The purpose of this account was to collect table statistics to improve database performance later on down the road. | ICE EID | NOC/EID DBA | When launching the e3 application, e3 modules will not load. | | ICE EID | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Detentions | 3/25/18 16:25 | 3/25/18 17:25 | 1:00 | N/A | | On Sun 3/25/2018 9:23 AM e3 support received alerts from the TOC for a RESOURCE_POOL_LIMIT reached for BEMSD-E3Detention and BEMSD-E3Intake. At 9:45 AM e3 support investigated the alerts for both applications and confirmed all alerts cleared at 9:51 AM. At approx. 4:32 PM e3 support received another alert notification from the Technology Operations Center (TOC) for RESOURCE_POOL_LIMIT for e3 Detentions and Intake. At 5:08 PM EDME recycled the servers, e3 support confirmed all applications were accessible, and the JACC confirmed with all reporting sites that e3 core applications were accessible. This issue was related to major response time spikes between 8:00 AM and 9:40 AM and another between 4:00 PM and 5:00 PM for all the e3 applications. The spikes at 5:00 PM were massive, 200,000 ms, so response times were 200 seconds. Connections to EID were slow, so the Data Source resource pools were maxed out, causing the hung threads. The issue was impacting all the applications at the same time but has since cleared following the recycle. | | OBP;#OFO | | | 3/25/18 16:30 | TOC | 10059420 | Yes | N/A | Sun 3/25/2018 9:23 AM - APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEMSD-E3Detention Application for the last 1 minute(s): / INC000010059079
Sun 3/25/2018 9:34 AM - BEMSD-E3Intake (RESOURCE_POOL_LIMIT): This policy was
Sun 3/25/2018 9:45 AM - e3 support investigated BEMSD-E3Intake (RESOURCE_POOL_LIMIT):
Sun 3/25/2018 9:46 AM - e3 support investigated - APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEMSD-E3Detention Application for the last 1 minute(s): / INC000010059079
Sun 3/25/2018 9:51 AM- e3 support confirmed all alerts cleared - BEMSD-E3Detention /
Sun 3/25/2018 9:57 AM - e3 support confirmed all alerts cleared EMSD-E3Intake (RESOURCE_POOL_LIMIT)
Sun 3/25/2018 4:32 PM - EMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEM
Sun 3/25/2018 4:46 PM- APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT
Sun 3/25/2018 4:54 PM - Hari Chirukuri Technology Operations Center – request to recycle
Sun 3/25/2018 4:55 PM – Keith Turner approved
Sun 3/25/2018 5:08 PM - Hari Chirukuri Technology Operations Center start recycle
Sun 3/25/2018 5:18 PM - The following instances were all restarted:
e3_detention_bemms-p008_ms1
e3_detention_bemms-p010_ms1
e3_detention_bemms-p012_ms1
e3_processing_bemms-p010_ms1
Sun 3/25/2018 5:21 PM – e3 support verifying process after recycle
Sun 3/25/2018 5:27 PM – e3 support confirmed all e3 core applications up and running
Sun 3/25/2018 6:05 PM – JACC confirmed with all reporting sites that issue has cleared
| | | e3 Connection Issues | | E3 | EWS performed a rolling restart on the E3 servers. | Connections to EID were slow, so the Data Source resource pools were maxed out, causing the hung threads. | N/A | N/A | E3 timing out - Users are unable to log into the E3 application / Intermittent | No | EWS | N/A | N/A | e3_detention_bemms-p008_ms1, e3_detention_bemms-p010_ms1, e3_detention_bemms-p012_ms1, e3_processing_bemms-p010_ms1 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
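The root cause noted above was slow EID connections exhausting the Data Source resource pools. A minimal sketch of a simple response-time probe that would surface slow EID round trips before the pools max out; the connection details and the five-second threshold are illustrative assumptions.

# Illustrative sketch only: connection details and threshold are assumptions.
import time
import cx_Oracle

RESPONSE_THRESHOLD_SECONDS = 5

# Time a trivial round trip to EID; when connections are slow, the application
# data source pools (e.g. BEMSD-E3Detention) back up with hung threads.
start = time.monotonic()
conn = cx_Oracle.connect("probe_user", "probe_password", "eid-db-host/EIDPRD")
cur = conn.cursor()
cur.execute("SELECT 1 FROM dual")
cur.fetchone()
elapsed = time.monotonic() - start
conn.close()

if elapsed > RESPONSE_THRESHOLD_SECONDS:
    print(f"ALERT: EID round trip took {elapsed:.1f}s (threshold {RESPONSE_THRESHOLD_SECONDS}s)")
else:
    print(f"EID round trip OK: {elapsed:.1f}s")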
| Unplanned Outage | e3 Processing | 3/25/18 16:25 | 3/25/18 17:25 | 1:00 | N/A | | On Sun 3/25/2018 9:23 AM e3 support received alerts from the TOC for a RESOURCE_POOL_LIMIT reached for BEMSD-E3Detention and BEMSD-E3Intake. At 9:45 AM e3 support investigated the alerts for both applications and confirmed all alerts cleared at 9:51 AM. At approx. 4:32 PM e3 support received another alert notification from the Technology Operations Center (TOC) for RESOURCE_POOL_LIMIT for e3 Detentions and Intake. At 5:08 PM EDME recycled the servers, e3 support confirmed all applications were accessible, and the JACC confirmed with all reporting sites that e3 core applications were accessible. This issue was related to major response time spikes between 8:00 AM and 9:40 AM and another between 4:00 PM and 5:00 PM for all the e3 applications. The spikes at 5:00 PM were massive, 200,000 ms, so response times were 200 seconds. Connections to EID were slow, so the Data Source resource pools were maxed out, causing the hung threads. The issue was impacting all the applications at the same time but has since cleared following the recycle. | | OBP;#OFO | | | 3/25/18 16:30 | TOC | 10059420 | Yes | N/A | Sun 3/25/2018 9:23 AM - APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEMSD-E3Detention Application for the last 1 minute(s): / INC000010059079
Sun 3/25/2018 9:34 AM - BEMSD-E3Intake (RESOURCE_POOL_LIMIT): This policy was
Sun 3/25/2018 9:45 AM - e3 support investigated BEMSD-E3Intake (RESOURCE_POOL_LIMIT):
Sun 3/25/2018 9:46 AM - e3 support investigated - APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEMSD-E3Detention Application for the last 1 minute(s): / INC000010059079
Sun 3/25/2018 9:51 AM- e3 support confirmed all alerts cleared - BEMSD-E3Detention /
Sun 3/25/2018 9:57 AM - e3 support confirmed all alerts cleared EMSD-E3Intake (RESOURCE_POOL_LIMIT)
Sun 3/25/2018 4:32 PM - EMSD-E3Detention / Event: RESOURCE_POOL_LIMIT: BEM
Sun 3/25/2018 4:46 PM- APPD - BEMSD-E3Detention / Event: RESOURCE_POOL_LIMIT
Sun 3/25/2018 4:54 PM - Hari Chirukuri Technology Operations Center – request to recycle
Sun 3/25/2018 4:55 PM – Keith Turner approved
Sun 3/25/2018 5:08 PM - Hari Chirukuri Technology Operations Center start recycle
Sun 3/25/2018 5:18 PM - The following instances were all restarted:
e3_detention_bemms-p008_ms1
e3_detention_bemms-p010_ms1
e3_detention_bemms-p012_ms1
e3_processing_bemms-p010_ms1
Sun 3/25/2018 5:21 PM – e3 support verifying process after recycle
Sun 3/25/2018 5:27 PM – e3 support confirmed all e3 core applications up and running
Sun 3/25/2018 6:05 PM – JACC confirmed with all reporting sites that issue has cleared
| | | e3 Connection Issues | | E3 | EWS performed a rolling restart on the E3 servers. | Connections to EID were slow, so the Data Source resource pools were maxed out, causing the hung threads. | N/A | N/A | E3 timing out - Users are unable to log into the E3 application / Intermittent | No | EWS | N/A | N/A | e3_detention_bemms-p008_ms1, e3_detention_bemms-p010_ms1, e3_detention_bemms-p012_ms1, e3_processing_bemms-p010_ms1 | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Assault | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 3/22/18 11:00 | 3/22/18 16:00 | 5:00 | N/A | WR_7227 e3 – Training (EDU) environment upgrade to match PRD code baseline | Following are the planned changes:
Application updates- Deploy the latest EAR and WAR files to bring the training environment (EDU) to match Production
Script to update the Config DB for Applications
There will be no Systems Acceptance Testing (SAT) sign off for EDU
| e3 Biometrics | OBP;#OFO | | | | | 10034964 | Yes | | | 3/22/18 16:00 | 3/22/18 11:00 | e3 EDU Configuration Maintenance | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Assault | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 3/15/18 02:50 | 3/15/18 05:30 | 2:40 | N/A | N/A | Purpose:
The release will include the following new e3 application code and changes (i.e., enhancements, bug fixes, and configurations)
• WR_7193 – e3 FPQ2 - Add IRS-NG Super Query link on subject details screen
• WR_7195 – e3 Mobile - Updates to cache export and camera/fingerprint capture UI
• WR_7196 – e3 Biometrics - Fix to address missing fingerprints from image cache
• WR_7194 – Admin App - Framework improvements and new features
• WR_6966 – e3 Prosecutions NexGen – new feature deployment
• WR_7210 – e3 Prosecutions – Update immigration disposition LOV
o Add Event Number to report & ECF page
• WR_7211 – e3 Guide Analysis – Fix queries to not use faulty Oracle function (numtoyminterval)
• WR_7212 – e3 Intake – New VR Processing Pathway
• WR_7213 – e3 Processing – Changes to I-770/I-826 forms and fix open narrative in MS Word
• WR_7214 – TEI – Update the business logic for TEI data load
• WR_7215 – e3 – All system database account password change (ICE JIRA Ticket# ESOPS-3200) | e3 Biometrics | OBP;#OFO | | | | | 9982077 | Yes | | | 3/15/18 05:30 | 3/15/18 02:50 | e3 Application | | E3 | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Assault | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible, which looks like it could be due to a network issue. Users are unable to access all e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As investigation into this issue continues, it appears there is an issue with both SSO Servers preventing agents in the field from accessing all e3 applications and TOMIS. It appears that one of the SSO Servers had a storage issue that caused the server to go down, but at this time we are unsure what caused the second SSO server to go down; per WSG, we need to get one of the servers up before we can begin troubleshooting the second server. WSG is currently troubleshooting this issue and trying to bring the SSO Servers back up.
Update 2: Please see the new bridge call information for this issue. The last bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying that the Restore on the SSO Servers that WSG conducted. After this work has been verified and given the ok, e3 support will recycle our e3 production environment (to include all applications) and once complete e3 support will reach out to the field to confirm if this change has restored access to our applications
As of 12:26PM, WSG has confirmed that the SSO servers has been restored completely. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO ServersAt12:26PM, WSG has confirmed that the SSO servers has been restored completely. E3 Support has confirmed with users in the field that e3 applications are accessible. | One of the SSO Servers had a storage issue that caused the server to go down. We are unsure what caused the second SSO server to go down. | DC1 | WSG, DC1 | Users in the field were unable to access all e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Biometrics | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 FPQ | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Detentions | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 OASISS | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Processing | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Prosecutions | 3/7/18 08:20 | 3/7/18 11:45 | 3:25 | | | All e3 applications, including the e3 homepage, are inaccessible; this appears to be due to a network issue. Users are unable to access any e3 applications. | | OBP;#OFO | | | 3/7/18 08:20 | TSD | 9969124 | Yes | | As the investigation continues, it appears there is an issue with both SSO servers, preventing agents in the field from accessing all e3 applications and TOMIS. One of the SSO servers had a storage issue that caused it to go down; the cause of the second SSO server failure is not yet known. Per WSG, one of the servers must be brought back up before troubleshooting can begin on the second. WSG is currently troubleshooting the issue and working to bring the SSO servers back up.
Update 2: Please see the new bridge call information for this issue. The previous bridge line was ended and we are using this one moving forward.
Update 3: We are currently verifying the restore that WSG conducted on the SSO servers. Once this work has been verified and given the OK, e3 Support will recycle the e3 production environment (including all applications). Once complete, e3 Support will reach out to the field to confirm that access to our applications has been restored.
As of 12:26 PM, WSG has confirmed that the SSO servers have been fully restored. E3 Support has confirmed with users in the field that e3 applications are accessible. E3 Support will continue to monitor throughout the day to ensure no issues arise. | | | All e3 Applications Inaccessible | WSG/DC1 | E3 | WSG restored the SSO servers. At 12:26 PM, WSG confirmed that the SSO servers were fully restored, and E3 Support confirmed with users in the field that e3 applications are accessible. | One of the SSO servers had a storage issue that caused it to go down. The cause of the second SSO server failure is unknown. | DC1 | WSG, DC1 | Users in the field were unable to access any e3 applications. | | DC1/WSG | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Assault | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Detentions | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 OASISS | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Prosecutions | 3/6/18 13:45 | 3/6/18 15:50 | 2:05 | N/A | | • Incident Description and Impact Statement: E3 Support received notification from the Technology Operations Center (TOC) that their engineers had observed timeouts connecting to the ICE/EID database. E3 Support contacted the ICE/EID help desk, which advised they were not seeing any issues. Upon further investigation, the TOC confirmed that a router was down at DC1, contributing to the network slowness, and that engineers were actively investigating. Between 3:51 PM and 6:00 PM the alerts for the EID connection cleared and there were no further reports of timeouts. DC1 did not provide any further details or a resolution. | | OBP;#OFO | | | 3/6/18 13:45 | Technology Operations Center (TOC) | 9972439 | Yes | N/A | N/A | | | ICE/EID database Incident Impacting e3 core applications | | ICE/EID | DC1 corrected the issues with the router | A router was down at DC1 | | ICE/EID Help Desk, CBP Duty Officer, TOC, DC1 network team | e3 applications experienced intermittent issues | N/A | DC1 | N/A | e3 applications were available, although some users may have experienced intermittent issues | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 3/1/18 06:30 | 3/1/18 17:10 | 10:40 | N/A | | Incident Description and Impact Statement: Following the scheduled OBIM preventative infrastructure maintenance in Data Center 1 (DC1), 1 of 3 listeners in a scan listener pool of a Matcher cluster did not come back up after the switch maintenance. Currently, 10P Visit and 2P matcher capability in DC1 are operating at 50% capacity. Upon checking the IDENT backlog report, e3 Support noticed 63 transactions stuck in a Transaction Processing state starting at 2:00. E3 Support is currently on a bridge call with the OBIM PAS team to investigate. There is currently a backlog of 135 IDENT transactions (Search & Enroll and Search Only). E3 Support is investigating the impact to users and will provide updates as they come. | | OBP;#OFO/SIGMA | | | 3/1/18 14:30 | Internal | 9938992 | Yes | N/A | 14:19 e3 Support notices IDENT transactions backlog at 65 and growing.
14:50 ABIS backlog at 65 transactions
15:15 e3 Support confirms with OBIM that there is an issue with delayed responses & received bridge call information.
15:30 e3 Support joins bridge call.
15:40 OBIM watch desk having connectivity issues.
15:48 Rich drops from call.
15:51 Bridge Call accidentally ended.
15:53 Bridge reconvenes, wanting to know ETA of DC1 10Print coming back online. Original estimate 4 hours to bring system queues back up
16:00 Jude rejoins and reports that the 10print queues are back online on DC2 and may take 4 hours to clear. The DC2 queue needs to drain before the DC1 queues can be restarted; there are approximately 20,000 transactions in the queue right now. The Visitors queue (Searches, Enrolls) will be brought back up at that time.
16:04 WATCHLIST search should be back online in 20-25 minutes. | | | Unplanned (OBIM) Outage Impacting IDENT Transactions | | OBIM/IDENT | Bypassed the affected listener | Listener did not come back up after planned switch maintenance. | OBIM | N/A | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending a response from IDENT. Users will not be able to access immigration histories or alert information for subjects in the event of an IDENT outage. | No | OBIM | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 3/1/18 06:30 | 3/1/18 17:10 | 10:40 | N/A | | Incident Description and Impact Statement: Following the scheduled OBIM preventative infrastructure maintenance in Data Center 1 (DC1), 1 of 3 listeners in a scan listener pool of a Matcher cluster did not come back up after the switch maintenance. Currently, 10P Visit and 2P matcher capability in DC1 are operating at 50% capacity. Upon checking the IDENT backlog report, e3 Support noticed 63 transactions stuck in a Transaction Processing state starting at 2:00. E3 Support is currently on a bridge call with the OBIM PAS team to investigate. There is currently a backlog of 135 IDENT transactions (Search & Enroll and Search Only). E3 Support is investigating the impact to users and will provide updates as they come. | | OBP;#OFO/SIGMA | | | 3/1/18 14:30 | Internal | 9938992 | Yes | N/A | 14:19 e3 Support notices IDENT transactions backlog at 65 and growing.
14:50 ABIS backlog at 65 transactions
15:15 e3 Support confirms with OBIM that there is an issue with delayed responses & received bridge call information.
15:30 e3 Support joins bridge call.
15:40 OBIM watch desk having connectivity issues.
15:48 Rich drops from call.
15:51 Bridge Call accidentally ended.
15:53 Bridge reconvenes, wanting to know ETA of DC1 10Print coming back online. Original estimate 4 hours to bring system queues back up
16:00 Jude rejoins and reports that the 10print queues are back online on DC2 and may take 4 hours to clear. The DC2 queue needs to drain before the DC1 queues can be restarted; there are approximately 20,000 transactions in the queue right now. The Visitors queue (Searches, Enrolls) will be brought back up at that time.
16:04 WATCHLIST search should be back online in 20-25 minutes. | | | Unplanned (OBIM) Outage Impacting IDENT Transactions | | OBIM/IDENT | Bypassed the affected listener | Listener did not come back up after planned switch maintenance. | OBIM | N/A | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending a response from IDENT. Users will not be able to access immigration histories or alert information for subjects in the event of an IDENT outage. | No | OBIM | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 2/26/18 15:40 | 2/27/18 15:15 | 23:35 | N/A | | Incident Description and Impact Statement: At 3:41 PM on 2/26/2018, E3 Support was notified by email that the DoD Automated Biometric Identification System (DoD ABIS) was unable to process biometric transactions in a timely manner due to being in a degraded state, impacting all sFTP transaction submitters. As of 7:09 PM on 2/26/2018, DoD Biometric Enterprise stated that ABIS had begun to process transactions. Due to the ABIS backlog, e3 Support reached out to the ABIS watch desk, spoke with Sarra Martin, and was informed that ABIS had to revert the server to its setup from 3 months prior to restore productivity. Sarra stated that if any changes had been made to the transfer protocols, ABIS did not have them and would have to reach out to their engineers. E3 Support engaged ABIS government PM Mike Moral, who informed e3 Support that all folders on the sFTP server had been wiped. DoD ABIS advised that the sFTP issue and its pathways could not be resolved until the next morning at the earliest, but was willing to provide a resource to join the bridge call to investigate. E3 Support established a bridge call with ABIS engineers, DataPower, and the Duty Officers. Nicole from DFBA was able to see the e3 servers contacting the ABIS servers, but no data was flowing. James Butler (DataPower) and Nicole from DFBA confirmed the ABIS sFTP server was not authenticating the username and password information from e3, so the bridge call was adjourned until 7 AM the next morning, when more ABIS network resources would be available to assist in remedying the issue. On 2/27/2018, ABIS reported seeing e3 transactions processing since 6:00 AM. Software Engineers confirmed that they were no longer seeing the 401 code blocking authentication files to the ABIS sFTP server. Due to the DoD unplanned outage, there were 979 ABIS records in a transaction processing state. Software Engineers confirmed transactions were processing in real time; the remaining transactions in the backlog were submitted to DoD to have responses pulled. Software Engineer Jeff Sanders sped up the response timer, which helped clear the backlog. As of 3:14 PM on 2/27/2018, the ABIS backlog was at 6 transactions total. Software Engineers confirmed traffic continues to flow and the backlog is now within normal range. | | OBP;#OFO | | | 2/26/18 15:00 | Defense Forensics and Biometrics Agency | 9916162 | Yes | N/A | 15:41 e3 Support notified by email that DoD ABIS is having an unplanned outage due to sFTP server issues, with no ETA for a resolution. ABIS engineers are currently working the issue.
15:51 e3 Support sends out Situational Awareness on issue & post on Home page
16:01 e3 Calls OBIM for bridge call information (if any) – None
16:20 e3 Calls ABIS watch desk for information update (None at this time). Backlog count is 171 , 78 over SLA.
16:45 Backlog count 182, 95 over SLA
16:55 Update 1 sent
17:55 Backlog count 251 with 161 over SLA
18:00 Update 2 sent
19:09 Sarra Martin from ABIS states that ABIS is processing transactions once again
19:33 Backlog count is 328, 251 over SLA.
20:15 Backlog count is still growing, reaching out to ABIS Watchdesk.
20:25 Spoke with Sarra Martin, Problem resides with transfer protocols that may or may not be affected by the server reversion to 3 months prior. Backlog is growing & Jeff Sanders with e3 is getting involved.
20:33 Backlog Count 378 with 293 over SLA
20:45 Brandon Long is going to try & spin up a bridge call over this ABIS issue. No other information from ABIS is forthcoming.
21:40 From ABIS (Sarra Martin) Our IT Teams have been notified that you guys are having this issue. To put it plainly, the problem created today with the SFTP and its pathways cannot be solved until tomorrow morning at the earliest. I know you requested someone jump on a phone call with you guys and we have someone here that can do that who is our back-up SFTP guru. She should be getting in contact with you all soon and you can coordinate this bridge call from there.
21:50 Bridge call spun up, e3 Support & Duty Officers, waiting on ABIS representative.
22:05 Duty Officers continue to reach out to ABIS assets.
22:06 Nicole from DFBA- BIMA joined call, waiting for DataPower to join call.
22:10 Backlog Count 475, total with 356 over SLA
22:18 Jeff Sanders from e3 Joins call & gives Nicole account information.
22:23 Duty Officers so far have been unable to get anyone from DataPower to join call, 1st contact no answer, 2nd contact was off & unavailable.
22:30 No response to any of the attempts to reach DataPower personnel & have them join bridge call. Will have the Government lead attempt to contact.
22:41 James Butler from DataPower joins call.
22:42 Jackie Bunker joins call.
22:55 Datapower checks the server login & password again.
23:11 DataPower & Nicole confirm that e3 server traffic cannot validate logon to sFTP server.
23:28 Bridge call over until tomorrow morning, when more resources are available to remedy the situation. | | | Situational Awareness: (DoD ABIS) Unplanned Outage affecting e3 Biometrics | | DoD ABIS | ABIS had to revert the server to its setup from 3 months prior to restore productivity | Per ABIS government PM Mike Moral, all folders on the sFTP server were wiped. | ABIS | DFBA, ABIS, Duty Officer, DataPower |
Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. Agents are not required to hold subjects until ABIS returns online. The highest-level supervisor at the station will be the final deciding official on the detention disposition. | NO | ABIS | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 2/22/18 15:15 | 2/22/18 22:50 | 7:35 | N/A | | Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. E3 support is currently investigating the impact to users and will provide updates as they come | | OBP;#OFO/SIGMA | | | 2/22/18 16:00 | OBIM (Email) | 9898721 | Yes | N/A | 2/22/18
14:55 Start of Event
15:16 OBIM notices delayed responses coming from CJIS affecting 10 print & TPRS.
16:01 OBIM notifies e3 via email that they are seeing delayed responses from CJIS
16:15 Duty Officers call e3 Support & inquire if the issue is impacting us
16:30 Backlog count rises to 21 transactions, e3 decides to send out Situational Awareness
17:05 Gene joins the bridge, John Bassett giving update, JABS responses are returning properly. A call was made to CJIS & they responded with “We are performing “Cleanup” on the system”. Initially claimed there were no issues
17:11 Booking backlog jumped from 34 to 54 in 10 minutes. OBIM ticket # 513210
17:15 OBIM to send out notification that it is impacting Booking & S_V (We are still within SLA)
17:18 Sending over 6 TPAC transactions that are over SLA to CJIS to be worked on.
17:20 OBIM wants Denice at CJIS to be ready to receive the e3 transactions once they hit SLA. No CJIS responses in the last 30 minutes
17:24 e3 Booking backlog has grown to 69, TPRS is processing in real time.
17:28 e3 Support confirmed that OBIM is continuing to reach out to CJIS over the Bookings responses. OBIM has also confirmed that connectivity to CJIS through EMSG is operating under normal parameters.
17:36 OBIM monitors note connectivity keeps flowing from green to red & back & forth.
17:37 OBIM reached Sherry Weatherly and is sending her the JABS TCN’s (She did not know there was a CJIS issue)Duty Officers is sending CC’s to others on bridge.
17:42 John Bassett confirmed with Sherry Weatherly that CJIS is performing unscheduled maintenance & that’s the cause of this issue.
17:45 CJIS expects the Unplanned maintenance to be completed by 18:00.
17:50 e3 Booking backlog count now stands at 93, TPRS is processing in real time.
17:55 Update 2 sent
17:59 There has been no material change in responses coming from CJIS in the last 2 hours.
18:06 e3 Support member Nielab joined the bridge call.
18:07 OBIM reaching out to Sherry Weatherly from CJIS for an update
18:11 OBIM was unable to reach to Sherry Weatherly a voicemail message was left for her.
18:19 OBIM was able to reach out to (OCC) Operation Control Center for CJIS and it was reported that CJIS still working the issue and there is no ETA when services will be back up and running
18:20 OBIM started working on putting together a list of TCN to be processed by CJIS
19:25 OBIM forwarded the list of TCN to CJIS
19:37 e3 Support members received confirmation from management and dropped off the bridge call. E3 Support was to rejoin the bridge call at 10:30 PM for a status update.
23:11 OBIM has confirmed that e3 transactions are processing in real time and the backlog of booking (BKG) transactions has cleared and is within normal range. CJIS continues to experience issues, and engineers are onsite troubleshooting delayed responses impacting other agencies. CJIS did not provide further detail about the root cause. | | | NGI / CJIS Unplanned Outage affecting Biometrics & Processing | | NGI/CJIS | CJIS manually put through transactions until the backlog cleared. | Unscheduled maintenance caused the outage | CJIS | e3, OBIM, CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending a response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety*: without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 2/22/18 15:15 | 2/22/18 22:50 | 7:35 | N/A | | Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. E3 support is currently investigating the impact to users and will provide updates as they come | | OBP;#OFO/SIGMA | | | 2/22/18 16:00 | OBIM (Email) | 9898721 | Yes | N/A | 2/22/18
14:55 Start of Event
15:16 OBIM notices delayed responses coming from CJIS affecting 10 print & TPRS.
16:01 OBIM notifies e3 via email that they are seeing delayed responses from CJIS
16:15 Duty Officers call e3 Support & inquire if the issue is impacting us
16:30 Backlog count rises to 21 transactions, e3 decides to send out Situational Awareness
17:05 Gene joins the bridge, John Bassett giving update, JABS responses are returning properly. A call was made to CJIS & they responded with “We are performing “Cleanup” on the system”. Initially claimed there were no issues
17:11 Booking backlog jumped from 34 to 54 in 10 minutes. OBIM ticket # 513210
17:15 OBIM to send out notification that it is impacting Booking & S_V (We are still within SLA)
17:18 Sending over 6 TPAC transactions that are over SLA to CJIS to be worked on.
17:20 OBIM wants Denice at CJIS to be ready to receive the e3 transactions once they hit SLA. No CJIS responses in the last 30 minutes
17:24 e3 Booking backlog has grown to 69, TPRS is processing in real time.
17:28 e3 Support confirmed that OBIM is continuing to reach out to CJIS over the Bookings responses. OBIM has also confirmed that connectivity to CJIS through EMSG is operating under normal parameters.
17:36 OBIM monitors note connectivity keeps flowing from green to red & back & forth.
17:37 OBIM reached Sherry Weatherly and is sending her the JABS TCN’s (She did not know there was a CJIS issue)Duty Officers is sending CC’s to others on bridge.
17:42 John Bassett confirmed with Sherry Weatherly that CJIS is performing unscheduled maintenance & that’s the cause of this issue.
17:45 CJIS expects the Unplanned maintenance to be completed by 18:00.
17:50 e3 Booking backlog count now stands at 93, TPRS is processing in real time.
17:55 Update 2 sent
17:59 There has been no material change in responses coming from CJIS in the last 2 hours.
18:06 e3 Support member Nielab joined the bridge call.
18:07 OBIM reaching out to Sherry Weatherly from CJIS for an update
18:11 OBIM was unable to reach to Sherry Weatherly a voicemail message was left for her.
18:19 OBIM was able to reach out to (OCC) Operation Control Center for CJIS and it was reported that CJIS still working the issue and there is no ETA when services will be back up and running
18:20 OBIM started working on putting together a list of TCN to be processed by CJIS
19:25 OBIM forwarded the list of TCN to CJIS
19:37 e3 Support members received confirmation from management and dropped off the bridge call. E3 Support was to rejoin the bridge call at 10:30 PM for a status update.
23:11 OBIM has confirmed that e3 transactions are processing in real time and the backlog of booking (BKG) transactions has cleared and is within normal range. CJIS continues to experience issues, and engineers are onsite troubleshooting delayed responses impacting other agencies. CJIS did not provide further detail about the root cause. | | | NGI / CJIS Unplanned Outage affecting Biometrics & Processing | | NGI/CJIS | CJIS manually put through transactions until the backlog cleared. | Unscheduled maintenance caused the outage | CJIS | e3, OBIM, CJIS | e3 Biometrics will be available during this outage, and users will be able to submit transactions; however, all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending a response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety*: without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 2/15/18 03:00 | | 3:00 | | | ICE EID Production Maintenance | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | N/A | Yes | | | 2/15/18 05:00 | 2/15/18 03:00 | ICE EID Production Maintenance | | | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Biometrics | 2/11/18 11:25 | 2/11/18 15:15 | 3:50 | #511136 - OBIM | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly.
Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | | OBP;#OFO;#OFO/SIGMA | | | 2/11/18 13:05 | OBIM, 1:05pm by Email, 1:12pm by Phone | #9843196 | Yes | N/A | 11:28am OBIM notices delayed responses coming from CJIS. Transactions have their JTID’s so the problem is with CJIS
1:05pm OBIM notifies e3 by email
1:12pm OBIM notifies e3 by phone
1:35pm Situational Awareness sent Bridge call information included
1:46 pm no known issues with CJIS after speaking with their watch Commander, 21 transactions over SLA at the moment. Going to call again if no one from CJIS joins bridge or gives OBIM an update after 20 minutes
2:06pm Bridge continues with backlog slightly dropping from high of 51 total
2:15pm OBIM reached out to CJIS watch desk & discovered that there was a ticket opened with their help desk but no one is actively working it.
2:29pm Leslie Donovan from CJIS is going to join bridge call
2:36pm Brandon Joins Bridge Call
2:38pm Counts are Total 15 in backlog, 7 over SLA
2:40pm Maryanne Duffy joins Bridge
2:50pm #511136 OBIM Ticket number, CBP ticket number #9843196
2:57pm Total overall backlog count = 12, 8 over SLA
3:00pm Leslie Donovan is running behind
3:13pm Counts are: Total = 9,
3:14pm Leslie Boyer from CJIS had come in and started working on remaining backlog in background unbeknownst to OBIM.
3:17pm OBIM confirmed they were seeing transactions processed in real time. Bridge call to shut down after sending over SLA TID’s to CJIS to be manually worked off.
3:17pm CJIS has resolved issue by manually working backlog off. | | | CJIS Delayed Responses Affecting e3 Biometrics & FPQ2 | NGI / CJIS | NGI/CJIS | Resolved: As of 3:17PM CJIS engineers resolved issue by manually working off backlog in the background. Current backlog counts are as follows. | CJIS Reported volume | CJIS | OBIM, e3 | Delayed responses to biometrics transactions | N/A | CJIS | N/A | e3 was always available | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 FPQ | 2/11/18 11:25 | 2/11/18 15:15 | 3:50 | #511136 - OBIM | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly.
Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | | OBP;#OFO;#OFO/SIGMA | | | 2/11/18 13:05 | OBIM, 1:05pm by Email, 1:12pm by Phone | #9843196 | Yes | N/A | 11:28am OBIM notices delayed responses coming from CJIS. Transactions have their JTID’s so the problem is with CJIS
1:05pm OBIM notifies e3 by email
1:12pm OBIM notifies e3 by phone
1:35pm Situational Awareness sent Bridge call information included
1:46 pm no known issues with CJIS after speaking with their watch Commander, 21 transactions over SLA at the moment. Going to call again if no one from CJIS joins bridge or gives OBIM an update after 20 minutes
2:06pm Bridge continues with backlog slightly dropping from high of 51 total
2:15pm OBIM reached out to CJIS watch desk & discovered that there was a ticket opened with their help desk but no one is actively working it.
2:29pm Leslie Donovan from CJIS is going to join bridge call
2:36pm Brandon Joins Bridge Call
2:38pm Counts are Total 15 in backlog, 7 over SLA
2:40pm Maryanne Duffy joins Bridge
2:50pm #511136 OBIM Ticket number, CBP ticket number #9843196
2:57pm Total overall backlog count = 12, 8 over SLA
3:00pm Leslie Donovan is running behind
3:13pm Counts are: Total = 9,
3:14pm Leslie Boyer from CJIS had come in and started working on remaining backlog in background unbeknownst to OBIM.
3:17pm OBIM confirmed they were seeing transactions processed in real time. Bridge call to shut down after sending over SLA TID’s to CJIS to be manually worked off.
3:17pm CJIS has resolved issue by manually working backlog off. | | | CJIS Delayed Responses Affecting e3 Biometrics & FPQ2 | NGI / CJIS | NGI/CJIS | Resolved: As of 3:17PM CJIS engineers resolved issue by manually working off backlog in the background. Current backlog counts are as follows. | CJIS Reported volume | CJIS | OBIM, e3 | Delayed responses to biometrics transactions | N/A | CJIS | N/A | e3 was always available | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 2/10/18 03:00 | 2/11/18 08:15 | 5:15 | N/A | | Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
| e3 Biometrics | OBP;#OFO | | | 2/10/18 06:00 | 9832292 | 9832292 | No | N/A | N/A | | | ICE/EID Maintenance Canceled | | ICE/EID | Upon receiving notification from ICE that the maintenance was canceled, e3 Support engaged EWS and requested removal of the site-down page and a recycle of services. | EAGLE experienced an issue that could not be resolved prior to the scheduled EID Oracle upgrade. ERO raised concerns and requested that the EAGLE issues be resolved first, so the Oracle upgrade was cancelled. ICE continued to work the application issue, seeking a resolution until approximately 4:15, at which point restoration of the application had to take precedence over the EID upgrade. | ICE | ICE, EDME, Duty Officer, e3 Management |
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
| N/A | EDME | N/A | Application was down due to site down page being implemented | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 2/10/18 03:00 | 2/11/18 08:15 | 5:15 | N/A | | Purpose:
ICE will be performing maintenance on the EID database.
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
| e3 Biometrics | OBP;#OFO | | | 2/10/18 06:00 | 9832292 | 9832292 | No | N/A | N/A | | | ICE/EID Maintenance Canceled | | ICE/EID | Upon receiving notification from ICE that the maintenance was canceled, e3 Support engaged EWS and requested removal of the site-down page and a recycle of services. | EAGLE experienced an issue that could not be resolved prior to the scheduled EID Oracle upgrade. ERO raised concerns and requested that the EAGLE issues be resolved first, so the Oracle upgrade was cancelled. ICE continued to work the application issue, seeking a resolution until approximately 4:15, at which point restoration of the application had to take precedence over the EID upgrade. | ICE | ICE, EDME, Duty Officer, e3 Management |
Impact:
All e3 modules (App Log; e3 Biometrics; e3 Detentions; e3 OASISS; e3 Processing; e3 Prosecutions; e3 Intake; and FPQ2) will be unavailable during this time.
| N/A | EDME | N/A | Application was down due to site down page being implemented | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 2/5/18 10:25 | 2/5/18 18:10 | 7:45 | N/A | | E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 2/5/18 13:05 | OBIM (email) | 9814574 | Yes | N/A | 10:25 AM: OBIM PAS notices slowness on CJIS responses but there are no TID’s over SLA at this time. They continue to monitor.
12:50 PM: OBIM service desk notified by OBIM PAS, 8 transactions over SLA at this time, 32 total in backlog
1:04 PM: e3 Support notified by phone from OBIM on CJIS slowness issue
1:06 PM: e3 Notified by email of CJIS issue, Bridge call spun up
1:15 PM: OBIM sent a list of backlogged transactions to start getting them pushed through
1:31 PM: Denice at CJIS stated that there are no issues on the CJIS side but Lacey Smith explained to her about the booking backlog & the over SLA transactions. She decided to investigate some more.
1:39 PM: Denice stated to Lacey that they have no issues, just volume & they are working off the TPRS first then tackling the bookings .
1:50 PM: E3 Support currently sees a backlog of 38 transactions, 13 of which are over SLA.
1:57 PM: MaryAnn Duffy joined the bridge call
2:18 PM: CJIS has pushed through ICE EAGLE transactions but has not pushed through e3 transactions.
3:26 PM: John Bassett joined the bridge call
3:45 PM: OBIM will be reaching out to Denise with CJIS for an update.
4:32 PM: Maurice has confirmed that he has escalated this recurring issue to upper management.
4:37 PM: CJIS has confirmed that they have cleared their current queue. OBIM is putting together another list of transactions that need to be worked through.
4:40 PM: New list of transactions over SLA sent to OBIM to get JTIDS & send to CJIS .
5:00 PM: CJIS confirmed that they received new list of transactions to work off.
5:27 PM: Significant drop noted in backlog report from 33 to 15 in the last 30 minutes.
5:59 PM: Next significant drop in backlog report: from 15 to 5 in the last 30 minutes with only one transaction over SLA. OBIM is sending it over to CJIS to be worked off.
6:12PM: e3 Backlog is down to 7 with no transactions over SLA. OBIM confirmed that transactions are processing in real time, e3 is dropping from the bridge call. | | | Unplanned Outage CJIS Affecting e3 Biometrics | CBP OFO | NGI/CJIS | CJIS has manually processed the e3 backlog to where they are now processing transactions in real time, as confirmed by OBIM. | CJIS Engineers confirm that there are no issues on their side, just volume. | CJIS | e3 Support, OBIM, FBI (CJIS) | e3 Biometrics was available during this outage, and users were able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | Unable to process in real time. | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 2/5/18 10:25 | 2/5/18 18:10 | 7:45 | N/A | | E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 2/5/18 13:05 | OBIM (email) | 9814574 | Yes | N/A | 10:25 AM: OBIM PAS notices slowness on CJIS responses but there are no TID’s over SLA at this time. They continue to monitor.
12:50 PM: OBIM service desk notified by OBIM PAS, 8 transactions over SLA at this time, 32 total in backlog
1:04 PM: e3 Support notified by phone from OBIM on CJIS slowness issue
1:06 PM: e3 Notified by email of CJIS issue, Bridge call spun up
1:15 PM: OBIM sent a list of backlogged transactions to start getting them pushed through
1:31 PM: Denice at CJIS stated that there are no issues on the CJIS side but Lacey Smith explained to her about the booking backlog & the over SLA transactions. She decided to investigate some more.
1:39 PM: Denice stated to Lacey that they have no issues, just volume & they are working off the TPRS first then tackling the bookings .
1:50 PM: E3 Support currently sees a backlog of 38 transactions, 13 of which are over SLA.
1:57 PM: MaryAnn Duffy joined the bridge call
2:18 PM: CJIS has pushed through ICE EAGLE transactions but has not pushed through e3 transactions.
3:26 PM: John Bassett joined the bridge call
3:45 PM: OBIM will be reaching out to Denise with CJIS for an update.
4:32 PM: Maurice has confirmed the he has escalated this reoccurring issue to upper management.
4:37 PM: CJIS has confirmed that they have cleared there current que. OBIM is putting together another list of transaction that need to be worked through.
4:40 PM: New list of transactions over SLA sent to OBIM to get JTIDS & send to CJIS .
5:00 PM: CJIS confirmed that they received new list of transactions to work off.
5:27 PM: Significant drop noted in backlog report from 33 to 15 in the last 30 minutes.
5:59 PM: Next significant drop in backlog report: from 15 to 5 in the last 30 minutes with only one transaction over SLA. OBIM is sending it over to CJIS to be worked off.
6:12PM: e3 Backlog is down to 7 with no transactions over SLA. OBIM confirmed that transactions are processing in real time, e3 is dropping from the bridge call. | | | Unplanned Outage CJIS Affecting e3 Biometrics | CBP OFO | NGI/CJIS | CJIS has manually processed the e3 backlog to where they are now processing transactions in real time, as confirmed by OBIM. | CJIS Engineers confirm that there are no issues on their side, just volume. | CJIS | e3 Support, OBIM, FBI (CJIS) | e3 Biometrics was available during this outage, and users were able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | Unable to process in real time. | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 2/5/18 10:25 | 2/5/18 18:10 | 7:45 | N/A | | E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 2/5/18 13:05 | OBIM (email) | 9814574 | Yes | N/A | 10:25 AM: OBIM PAS notices slowness on CJIS responses but there are no TID’s over SLA at this time. They continue to monitor.
12:50 PM: OBIM service desk notified by OBIM PAS, 8 transactions over SLA at this time, 32 total in backlog
1:04 PM: e3 Support notified by phone from OBIM on CJIS slowness issue
1:06 PM: e3 Notified by email of CJIS issue, Bridge call spun up
1:15 PM: OBIM sent a list of backlogged transactions to start getting them pushed through
1:31 PM: Denice at CJIS stated that there are no issues on the CJIS side, but Lacey Smith explained the booking backlog & the over-SLA transactions to her. She decided to investigate some more.
1:39 PM: Denice stated to Lacey that CJIS has no issues, just volume, and that they are working off the TPRS transactions first, then tackling the bookings.
1:50 PM: E3 Support is currently seeing a backlog of 38 transactions, 13 of which are over SLA.
1:57 PM: MaryAnn Duffy joined the bridge call
2:18 PM: CJIS has pushed through ICE EAGLE transactions but has not pushed through e3 transactions.
3:26 PM: John Bassett joined the bridge call
3:45 PM: OBIM will be reaching out to Denice at CJIS for an update.
4:32 PM: Maurice has confirmed that he has escalated this recurring issue to upper management.
4:37 PM: CJIS has confirmed that they have cleared their current queue. OBIM is putting together another list of transactions that need to be worked through.
4:40 PM: New list of transactions over SLA sent to OBIM to get JTIDs & send to CJIS.
5:00 PM: CJIS confirmed that they received new list of transactions to work off.
5:27 PM: Significant drop noted in backlog report from 33 to 15 in the last 30 minutes.
5:59 PM: Next significant drop in backlog report: from 15 to 5 in the last 30 minutes with only one transaction over SLA. OBIM is sending it over to CJIS to be worked off.
6:12PM: e3 Backlog is down to 7 with no transactions over SLA. OBIM confirmed that transactions are processing in real time, e3 is dropping from the bridge call. | | | Unplanned Outage CJIS Affecting e3 Biometrics | CBP OFO | NGI/CJIS | CJIS has manually processed the e3 backlog to where they are now processing transactions in real time, as confirmed by OBIM. | CJIS Engineers confirm that there are no issues on their side, just volume. | CJIS | e3 Support, OBIM, FBI (CJIS) | e3 Biometrics was available during this outage, and users were able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | Unable to process in real time. | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 2/4/18 08:35 | 2/4/18 16:20 | 7:45 | N/A | |
Incident Description and Impact Statement: At 8:35 AM upon checking the backlog of IAFIS transactions e3 support noticed a higher than normal count in the backlog. E3 provided a list of stuck transactions to IDENT. A list of JTIDs over SLA were sent to CJIS for review. At 10:03 AM E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO | | | 2/4/18 08:35 | e3 support | 9811477 | Yes | N/A | Incident Description and Impact Statement: At 8:35 AM upon checking the backlog of IAFIS transactions e3 support noticed a higher than normal count in the backlog. E3 provided a list of stuck transactions to IDENT. A list of JTIDs over SLA were sent to CJIS for review. At 10:03 AM E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly.
Update 2: The CJIS engineer has not confirmed whether the issue is on their end. CJIS engineers continue to investigate. E3 Support will continue monitoring the backlog and will provide more updates as they become available. The bridge call continues.
Update 3: CJIS continues to investigate issues on their system that were delaying responses for booking (BKG) transactions. BKG transactions in the backlog continue to slowly decline. E3 Support will continue monitoring the backlog to ensure it reaches normal levels.
Resolved: As of 4:19 PM OBIM has confirmed transactions are now processing in real time. The backlog has drained with the exception of a few transactions over the SLA which will be forwarded to CJIS for manual processing. | | | Situational Awareness: (CJIS) Situational awareness affecting e3 biometrics | CJIS | NGI/CJIS | Resolved: As of 4:19 PM OBIM has confirmed transactions are now processing in real time. The backlog has drained with the exception of a few transactions over the SLA which will be forwarded to CJIS for manual processing. | None Given | | OBIM,CJIS |
Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 2/4/18 08:35 | 2/4/18 16:20 | 7:45 | N/A | |
Incident Description and Impact Statement: At 8:35 AM upon checking the backlog of IAFIS transactions e3 support noticed a higher than normal count in the backlog. E3 provided a list of stuck transactions to IDENT. A list of JTIDs over SLA were sent to CJIS for review. At 10:03 AM E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO | | | 2/4/18 08:35 | e3 support | 9811477 | Yes | N/A | Incident Description and Impact Statement: At 8:35 AM upon checking the backlog of IAFIS transactions e3 support noticed a higher than normal count in the backlog. E3 provided a list of stuck transactions to IDENT. A list of JTIDs over SLA were sent to CJIS for review. At 10:03 AM E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly.
Update 2: The CJIS engineer has not confirmed whether the issue is on their end. CJIS engineers continue to investigate. E3 Support will continue monitoring the backlog and will provide more updates as they become available. The bridge call continues.
Update 3: CJIS continues to investigate issues on their system that were delaying responses for booking (BKG) transactions. BKG transactions in the backlog continue to slowly decline. E3 Support will continue monitoring the backlog to ensure it reaches normal levels.
Resolved: As of 4:19 PM OBIM has confirmed transactions are now processing in real time. The backlog has drained with the exception of a few transactions over the SLA which will be forwarded to CJIS for manual processing. | | | Situational Awareness: (CJIS) Situational awareness affecting e3 biometrics | CJIS | NGI/CJIS | Resolved: As of 4:19 PM OBIM has confirmed transactions are now processing in real time. The backlog has drained with the exception of a few transactions over the SLA which will be forwarded to CJIS for manual processing. | None Given | | OBIM,CJIS |
Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 Biometrics | 2/1/18 19:30 | 2/1/18 22:35 | 3:05 | | | E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the Department of Justice (DOJ) Division, for Joint Automated Booking System(JABS) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come.
The initial notification that e3 Support received indicated that the issue was on the (DOJ) JABS side. After further investigation by OBIM, it was observed that the issue was on the ICE EID side. | | OBP;#OFO | | | 2/1/18 22:00 | CBP Duty Officer | 9802947 | Yes | | | | | Situational Awareness: ICE/EID Situational awareness affecting e3 biometrics | ICE EID | ICE/EID | After further investigation by OBIM, it was observed that the issue was on the ICE EID side. ICE Engineers have corrected the issue that occurred on their Enforce Database (EID) Listener, which impacted established connections from the OBIM System. The EID Listeners were bounced. OBIM has also bounced the processes for JABS on their end. The backlog of TPRS transactions went from 276 to 2 and BKG transactions went from 259 to 3. The backlog has completely drained, all transactions are processing in real time, and none are over SLA. Bridge call ended at 10:36 PM. | ICE Engineers have corrected the issue that occurred on their Enforce Database (EID) Listener, which impacted established connections from the OBIM System. | ICE EID | OBIM/ICE EID | e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | ICE EID | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Significant Issue | e3 FPQ | 2/1/18 19:30 | 2/1/18 22:35 | 3:05 | | | E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the Department of Justice (DOJ) Division, for Joint Automated Booking System(JABS) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come.
The initial notification that e3 Support received indicated that the issue was on the (DOJ) JABS side. After further investigation by OBIM, it was observed that the issue was on the ICE EID side. | | OBP;#OFO | | | 2/1/18 22:00 | CBP Duty Officer | 9802947 | Yes | | | | | Situational Awareness: ICE/EID Situational awareness affecting e3 biometrics | ICE EID | ICE/EID | After further investigation by OBIM, it was observed that the issue was on the ICE EID side. ICE Engineers have corrected the issue that occurred on their Enforce Database (EID) Listener, which impacted established connections from the OBIM System. The EID Listeners were bounced. OBIM has also bounced the processes for JABS on their end. The backlog of TPRS transactions went from 276 to 2 and BKG transactions went from 259 to 3. The backlog has completely drained, all transactions are processing in real time, and none are over SLA. Bridge call ended at 10:36 PM. | ICE Engineers have corrected the issue that occurred on their Enforce Database (EID) Listener, which impacted established connections from the OBIM System. | ICE EID | OBIM/ICE EID | e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | ICE EID | N/A | N/A | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/31/18 13:45 | 1/31/18 16:40 | 2:55 | N/A | | At approx. 12:00 pm the automated backlog report for IAFIS transactions showed 35 transactions in the backlog. At 12:40 PM the IAFIS backlog report displayed a total of 40 transactions in the backlog. During this time there were only 3 transactions over the SLA. E3 support sent a list of transactions that were stuck processing to OBIM to investigate. OBIM advised that CJIS has experienced high volumes, so booking transaction backlogs are high. OBIM compiled a list of TCNs that were over the SLA to send over to CJIS. CJIS confirmed there were no issues and advised they would look into the transactions. E3 support continued to monitor the backlog, and at approx. 2:14 PM ET E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/31/18 14:15 | OBIM | 9796182 | Yes | N/A | 1:40p OBIM noticed slowness & created event record.
2:14p OBIM notifies e3 Support Ticket created #9796182
2:40p Gerald updated e3, OBIM sent JTID’s over to CJIS to remediate. The overall counts are trending downward. 21 over SLA
2:46p Eddie from OBIM PAS joins call
2:51p OBIM asked Eddie to reach out to Denice about the JTID’s sent over earlier.
3:02p Backlog is 43, 16 over SLA
3:10p Robb Duty Watch Manager rejoins call, Still unable to reach CJIS (team is in a meeting)
3:32p Downward trend continues, 30 total, 15 over SLA
3:37p Still no answer from CJIS
3:38p Maurice Sims Joined
3:39p Backlog 27 total, 10 over SLA
3:41p Gene leaves bridge
3:52p Backlog 19 total, 9 over SLA
3:53p John Bassett joins bridge, replacing Mike Shehata and Lacey Smith.
4:02p OBIM is satisfied with where the count is for them (30). E3 is hovering at 17 total & 7 over SLA
4:10p Curtis from PAS 2nd shift joined call. Backlog 14 total, 5 over SLA
4:22p OBIM dropping call from Severity 2 to a Severity 3 status as e3 waits for movement on last 3 transactions over SLA.
4:31p OBIM states that CJIS never confirmed any issue other than volume on their part
4:39p Closing down bridge call. The 2 over SLA have been sent to CJIS for remediation. | | | (CJIS) Situational awareness affecting e3 biometrics | OFO, USBP | NGI/CJIS | Resolved: As of 4:39pm 1/31/2018 the Office of Biometric Identity Management (OBIM) reported that CJIS was operating & responding in real time to booking requests and that the booking backlog had dropped to 5. E3 Support will monitor for the next several hours. | N/A | CJIS | OBIM, CJIS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/31/18 13:45 | 1/31/18 16:40 | 2:55 | N/A | | At approx. 12:00 pm the automated backlog report for IAFIS transactions showed 35 transactions in the backlog. At 12:40 PM the IAFIS backlog report displayed a total of 40 transactions in the backlog. During this time there were only 3 transactions over the SLA. E3 support sent a list of transactions that were stuck processing to OBIM to investigate. OBIM advised that CJIS has experienced high volumes, so booking transaction backlogs are high. OBIM compiled a list of TCNs that were over the SLA to send over to CJIS. CJIS confirmed there were no issues and advised they would look into the transactions. E3 support continued to monitor the backlog, and at approx. 2:14 PM ET E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/31/18 14:15 | OBIM | 9796182 | Yes | N/A | 1:40p OBIM noticed slowness & created event record.
2:14p OBIM notifies e3 Support Ticket created #9796182
2:40p Gerald updated e3, OBIM sent JTID’s over to CJIS to remediate. The overall counts are trending downward. 21 over SLA
2:46p Eddie from OBIM PAS joins call
2:51p OBIM asked Eddie to reach out to Denice about the JTID’s sent over earlier.
3:02p Backlog is 43, 16 over SLA
3:10p Robb Duty Watch Manager rejoins call, Still unable to reach CJIS (team is in a meeting)
3:32p Downward trend continues, 30 total, 15 over SLA
3:37p Still no answer from CJIS
3:38p Maurice Sims Joined
3:39p Backlog 27 total, 10 over SLA
3:41p Gene leaves bridge
3:52p Backlog 19 total, 9 over SLA
3:53p John Bassett joins bridge, replacing Mike Shehata and Lacey Smith.
4:02p OBIM is satisfied with where the count is for them (30). E3 is hovering at 17 total & 7 over SLA
4:10p Curtis from PAS 2nd shift joined call. Backlog 14 total, 5 over SLA
4:22p OBIM dropping call from Severity 2 to a Severity 3 status as e3 waits for movement on last 3 transactions over SLA.
4:31p OBIM states that CJIS never confirmed any issue other than volume on their part
4:39p Closing down bridge call. The 2 over SLA have been sent to CJIS for remediation. | | | (CJIS) Situational awareness affecting e3 biometrics | OFO, USBP | NGI/CJIS | Resolved: As of 4:39pm 1/31/2018 the Office of Biometric Identity Management (OBIM) reported that CJIS was operating & responding in real time to booking requests and that the booking backlog had dropped to 5. E3 Support will monitor for the next several hours. | N/A | CJIS | OBIM, CJIS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/29/18 02:10 | 1/29/18 15:10 | 13:00 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the Department of Justice (DOJ) Division, for Joint Automated Booking System(JABS) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO;#OFO/SIGMA | | | 1/29/18 07:35 | OBIM | 9777423 | Yes | N/A | 8:13 AM e3 support (Tigist and Nielab) Joined the Bridge call
8:24 am JABS engineer confirmed they had restarted their server
8:25 am OBIM confirmed current queue count BKG: 278, TPRS: 1
8:30 am JABS engineers continue to investigate the issue
8:48 am Sam from JABS confirmed the slowness on transactions started at 1:32 am
9:12 am Nathan confirmed they are still not getting booking transactions
9:18 am Nilab asked which stations the corrupted files came from and whether the files are corrupted if they came from Windows 10
9:22 am JABS engineers confirm 300 BKG transactions are not processing and will investigate whether Windows 10 is the issue
9:30 am Nathan (JABS) confirms 27 transactions, all from e3, are corrupted
9:34 am Brandon joined the Bridge call
9:53 no calls to e3 over issue. OBIM found corruption coming from transactions started on Windows 7 workstations and being finished on Windows 10 workstations.
9:55 Nathan at OBIM found 2 corrupt transactions, Going to send the TID’s to e3 support.
10:03
10:05 Mike Decker from JABS joins bridge. Found corrupt binary data in the transactions. Received corrupt TID’s & e3 is investigating. Also on from JABS Brandt.
10:10 The transactions aren’t passing validation checks.
10:14 JABS inquires about any transactions returning from JABS & only ICE Eagle TID are returning. The failed files have IRIS photos involved.
10:19 Nathan is going to parse the file differences between the CBP Tid’s & the ICE Eagle Tid’s.
10:22 Nathan has found that there have been several returns from JABS every few minutes.
10:23 Mike has definitive proof that the corruption is caused by the IRIS PNG issue but nothing has been changed on the e3 side from last night.
10:28 19 Transactions returned at 9:40 this morning.
10:29 Rachelle Henderson asks “what do we do to correct this issue?”
10:33 Jose confirms there are / were no new Windows 10 deployments over the weekend & suggests a rolling reboot of all e3 Application servers.
10:37 Rachelle leaves bridge, OBIM issue from 0024 to 0620 could / would not affect JABS issue going on now. OBIM investigating if a major system was rebooting last night. Rolling restart of e3 application servers started.
10:44 OBIM asks for e3 to have a subject re-submitted so they could track transaction
10:49 OBIM re-confirmed that new subject transactions have been returning since 9:40am.
11:00 Jose Villafane confirmed that a terminal that had submitted one of the corrupted TID’s (McAllen, WS 710) had just sent & received a JTID in the last 5 minutes.
11:02 EWS joins call & is bringing on engineer to recycle servers.
11:05 e3 sends corrupt TID’s to OBIM to have them set to “Error” so the transaction (subjects) could be re-submitted.
11:08 JABS admitted to going down around 1:30am & coming back up at 7:00am. E3 asks why this isn’t being considered as the cause of the corruption.
11:15 Lacey confirms that new bookings are still processing in real time.
11:16 EWS in on the call & starting the rolling restart.
11:26 Server restart completed. E3 calling sites to have subjects resubmitted.
11:57 Resubmission of transactions is being done now, provided JABS re-allows submission.
12:12 OBIM received permission to resubmit 20 TID’s at a time to JABS
12:16 Prioritizing RGV transactions first.
12:25 e3 Support is seeing recently submitted originally corrupt transactions come back.
12:29 e3 wants OBIM to prioritize Weslaco, then all of the oldest transactions.
12:49 JABS confirmed that the JABS gateway was up the whole time.
12:53 Discussion of which system is at fault for outage.
1:04p Backlog down to 122 in queue and is only affecting e3.
1:20p Backlog count continuing its downward trend to 66 transactions.
1:36p Count is now 52 total, 39 over SLA
1:50p Count is now 41 total with 30 over SLA.
2:00p Count is now 29 with 23 over SLA
2:15p Count Is Now 27 with 21 over SLA
3:00p Count is now 7, with 6 over SLA
3:02p OBIM is lowering the severity down to 2
3:12p Bridge is shutting down now. | | | (JABS) Situational awareness affecting e3 biometrics | USBP OFO | JABS | Resolved: JABS has manually processed the remainder of the (BKG) transactions. E3 has confirmed that the backlog has drained down to 6 transactions. All booking transactions are processing in real time. There are a total of 7 booking transactions that were corrupted; these were set to error by OBIM. E3 Support is reaching out to Brackettville, RGV, and Yuma to have them resubmit these transactions. The root cause of the issue and why the transactions were corrupted has not been identified.
Status Update #/Resolution: Resolved
Incident Start Date/Time: 1/29/2018 2:10 AM
Incident End Date/ Time: 1/29/2018 3:14 PM
E3 Support Notified by: OBIM at 7:37 AM via email
Number of calls received: 0
Number of emails received: 0
Number of tickets related in Remedy: 2
CBP Ticket# 9777423 | N/A | JABS | OBIM PAS Team, CJIS Watch Commander, JABS engineers, EDME | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | JABS | N/A | e3 Biometrics was available, Transactions were not processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/29/18 02:10 | 1/29/18 15:10 | 13:00 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the Department of Justice (DOJ) Division, for Joint Automated Booking System(JABS) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO;#OFO/SIGMA | | | 1/29/18 07:35 | OBIM | 9777423 | Yes | N/A | 8:13 AM e3 support (Tigist and Nielab) Joined the Bridge call
8:24 am JABS engineer confirmed they had restarted their server
8:25 am OBIM confirmed current queue count BKG: 278, TPRS: 1
8:30 am JABS engineers continue to investigate the issue
8:48 am Sam from JABS confirmed the slowness on transactions started at 1:32 am
9:12 am Nathan confirmed they are still not getting booking transactions
9:18 am Nilab asked which stations the corrupted files came from and whether the files are corrupted if they came from Windows 10
9:22 am JABS engineers confirm 300 BKG transactions are not processing and will investigate whether Windows 10 is the issue
9:30 am Nathan (JABS) confirms 27 transactions, all from e3, are corrupted
9:34 am Brandon joined the Bridge call
9:53 no calls to e3 over issue. OBIM found corruption coming from transactions started on Windows 7 workstations and being finished on Windows 10 workstations.
9:55 Nathan at OBIM found 2 corrupt transactions, Going to send the TID’s to e3 support.
10:03
10:05 Mike Decker from JABS joins bridge. Found corrupt binary data in the transactions. Received corrupt TID’s & e3 is investigating. Also on from JABS Brandt.
10:10 The transactions aren’t passing validation checks.
10:14 JABS inquires about any transactions returning from JABS & only ICE Eagle TID are returning. The failed files have IRIS photos involved.
10:19 Nathan is going to parse the file differences between the CBP Tid’s & the ICE Eagle Tid’s.
10:22 Nathan has found that there have been several returns from JABS every few minutes.
10:23 Mike has definitive proof that the corruption is caused by the IRIS PNG issue but nothing has been changed on the e3 side from last night.
10:28 19 Transactions returned at 9:40 this morning.
10:29 Rachelle Henderson asks “what do we do to correct this issue?”
10:33 Jose confirms there are / were no new Windows 10 deployments over the weekend & suggests a rolling reboot of all e3 Application servers.
10:37 Rachelle leaves bridge, OBIM issue from 0024 to 0620 could / would not affect JABS issue going on now. OBIM investigating if a major system was rebooting last night. Rolling restart of e3 application servers started.
10:44 OBIM asks for e3 to have a subject re-submitted so they could track transaction
10:49 OBIM re-confirmed that new subject transactions have been returning since 9:40am.
11:00 Jose Villafane confirmed that a terminal that had submitted one of the corrupted TID’s (McAllen, WS 710) had just sent & received a JTID in the last 5 minutes.
11:02 EWS joins call & is bringing on engineer to recycle servers.
11:05 e3 sends corrupt TID’s to OBIM to have them set to “Error” so the transaction (subjects) could be re-submitted.
11:08 JABS admitted to going down around 1:30am & coming back up at 7:00am. E3 asks why this isn’t being considered as the cause of the corruption.
11:15 Lacey confirms that new bookings are still processing in real time.
11:16 EWS in on the call & starting the rolling restart.
11:26 Server restart completed. E3 calling sites to have subjects resubmitted.
11:57 Resubmission of transactions is being done now, provided JABS re-allows submission.
12:12 OBIM received permission to resubmit 20 TID’s at a time to JABS
12:16 Prioritizing RGV transactions first.
12:25 e3 Support is seeing recently submitted originally corrupt transactions come back.
12:29 e3 wants OBIM to prioritize Weslaco, then all of the oldest transactions.
12:49 JABS confirmed that the JABS gateway was up the whole time.
12:53 Discussion of which system is at fault for outage.
1:04p Backlog down to 122 in queue and is only affecting e3.
1:20p Backlog count continuing its downward trend to 66 transactions.
1:36p Count is now 52 total, 39 over SLA
1:50p Count is now 41 total with 30 over SLA.
2:00p Count is now 29 with 23 over SLA
2:15p Count Is Now 27 with 21 over SLA
3:00p Count is now 7, with 6 over SLA
3:02p OBIM is lowering the severity down to 2
3:12p Bridge is shutting down now. | | | (JABS) Situational awareness affecting e3 biometrics | USBP OFO | JABS | Resolved: JABS has manually processed the remainder of the (BKG) transactions. E3 has confirmed that the backlog has drained down to 6 transactions. All booking transactions are processing in real time. There are a total of 7 booking transactions that were corrupted; these were set to error by OBIM. E3 Support is reaching out to Brackettville, RGV, and Yuma to have them resubmit these transactions. The root cause of the issue and why the transactions were corrupted has not been identified.
Status Update #/Resolution: Resolved
Incident Start Date/Time: 1/29/2018 2:10 AM
Incident End Date/ Time: 1/29/2018 3:14 PM
E3 Support Notified by: OBIM at 7:37 AM via email
Number of calls received: 0
Number of emails received: 0
Number of tickets related in Remedy: 2
CBP Ticket# 9777423 | N/A | JABS | OBIM PAS Team, CJIS Watch Commander, JABS engineers, EDME | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | JABS | N/A | e3 Biometrics was available, Transactions were not processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/22/18 11:15 | 1/22/18 18:35 | 7:20 | N/A | | • CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. OBIM established a bridge call to contact CJIS representatives. After many attempts OBIM received a response from CJIS contact Garrett, who confirmed that they were experiencing no issues on their side but that the backlog was due to “High Volume”. By the time the booking backlog reached 122, additional SAs were called in by CJIS to mitigate the backlog manually. OBIM provided a few TCN numbers to assist them with locating our backlog of transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/22/18 12:05 | OBIM | 9745148 | Yes | N/A | 1:25 PM – Tigist Arefaynea joined the bridge call
1:27 pm - Garret from CJIS states there are no issues on their side just high volume.
1:30 pm - Brandon Long requested that the duty officer escalate to upper management if possible, and provided a brief update of the situation.
1:40 pm - Robert Gould from CBP joined the Bridge call
1:45 pm - Robert Gould asked e3 Support (Brandon Long) for the report on the CJIS issue
2:00pm - bridge has been stood up to monitor the issue
2:06 pm - OBIM engineer confirmed the prior transactions have still not been processed
2:20 pm – Backlog count is staying steady at this time
2:30 pm - We have 30 transactions over SLA and 54 BKG transactions in the backlog
3:00pm – Robert Gould confirmed he escalated to resolve the issue
3:15pm - Bridge Call has been stood up to monitor the issue
3:36 pm – William joined the bridge call
3:30 pm - Tigist dropped the bridge call and Terry Hall is taking over
3:34 pm – Gary Kelly is being sent the TID’s to be manually manipulated and pushed through.
3:36 pm - CJIS Ticket #253930
3:45 pm – Maurice Sims rejoined & stated that Donna (from CJIS) is going to join & ask for TCN’s so she can research further
3:58 pm – Maurice Sims drops & John Bassett Joins bridge.
4:37 pm – Backlog has come way down from a high of 122 to 49 of which 28 are beyond SLA
5:26pm – Very little material change in the backlog count from 49 to 43, of which 24 are over SLA
6:00 pm – Backlog queue now down to 21 total with 10 over SLA. OBIM will keep monitoring until it’s below 10 | | | (CJIS) Situational awareness affecting e3 biometrics | USBP OFO | NGI/CJIS | Resolved: After many attempts OBIM received a response from CJIS contact Garrett who confirmed that they were experiencing no issues on their side but the backlog was due to "High Volume". By the time bookings reached 122 backlogged additional SA's were called in by CJIS to mitigate the backlog manually. OBIM provided a few TCN numbers to assist them with locating our backlog of transactions. CJIS later reconfirmed that their system was recovering from an issue that they had experienced on their NGI System that had delayed the delivery of their BKG responses back to OBIM. By 6:30pm the booking backlog dropped to e3 standards with no transactions over SLA & all transactions processing in real time. | TBD | CJIS | OBP PAS Team, CBP Duty Officers, CJIS Watch Commander | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available during this time, IAFIS transactions were delayed in processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/22/18 11:15 | 1/22/18 18:35 | 7:20 | N/A | | • CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. OBIM established a bridge call to contact CJIS representatives. After many attempts OBIM received a response from CJIS contact Garrett, who confirmed that they were experiencing no issues on their side but that the backlog was due to “High Volume”. By the time the booking backlog reached 122, additional SAs were called in by CJIS to mitigate the backlog manually. OBIM provided a few TCN numbers to assist them with locating our backlog of transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/22/18 12:05 | OBIM | 9745148 | Yes | N/A | 1:25 PM – Tigist Arefaynea joined the bridge call
1:27 pm - Garret from CJIS states there are no issues on their side just high volume.
1:30 pm - Brandon Long requested that the duty officer escalate to upper management if possible, and provided a brief update of the situation.
1:40 pm - Robert Gould from CBP joined the Bridge call
1:45 pm - Robert Gould asked e3 Support (Brandon Long) for the report on the CJIS issue
2:00pm - bridge has been stood up to monitor the issue
2:06 pm - OBIM engineer confirmed the prior transactions have still not been processed
2:20 pm – Backlog count is staying steady at this time
2:30 pm - We have 30 transactions over SLA and 54 BKG transactions in the backlog
3:00pm – Robert Gould confirmed he escalated to resolve the issue
3:15pm - Bridge Call has been stood up to monitor the issue
3:36 pm – William joined the bridge call
3:30 pm - Tigist dropped the bridge call and Terry Hall is taking over
3:34 pm – Gary Kelly is being sent the TID’s to be manually manipulated and pushed through.
3:36 pm - CJIS Ticket #253930
3:45 pm – Maurice Sims rejoined & stated that Donna (from CJIS) is going to join & ask for TCN’s so she can research further
3:58 pm – Maurice Sims drops & John Bassett Joins bridge.
4:37 pm – Backlog has come way down from a high of 122 to 49 of which 28 are beyond SLA
5:26pm – Very little material change in the backlog count from 49 to 43, of which 24 are over SLA
6:00 pm – Backlog queue now down to 21 total with 10 over SLA. OBIM will keep monitoring until it’s below 10 | | | (CJIS) Situational awareness affecting e3 biometrics | USBP OFO | NGI/CJIS | Resolved: After many attempts OBIM received a response from CJIS contact Garrett who confirmed that they were experiencing no issues on their side but the backlog was due to "High Volume". By the time bookings reached 122 backlogged additional SA's were called in by CJIS to mitigate the backlog manually. OBIM provided a few TCN numbers to assist them with locating our backlog of transactions. CJIS later reconfirmed that their system was recovering from an issue that they had experienced on their NGI System that had delayed the delivery of their BKG responses back to OBIM. By 6:30pm the booking backlog dropped to e3 standards with no transactions over SLA & all transactions processing in real time. | TBD | CJIS | OBP PAS Team, CBP Duty Officers, CJIS Watch Commander | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available during this time, IAFIS transactions were delayed in processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/21/18 07:00 | 1/21/18 16:05 | 9:05 | N/A | | 1/21/2018 CJIS Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. OBIM established a bridge call to contact CJIS representatives. The CJIS Watch Commander confirmed engineers were investigating the issue. At 12:50 PM OBIM provided CJIS with a list of NGI TCNs to locate the transactions being impacted. E3 confirmed transactions are now processing in real time, and at 4:02 PM the backlog of transactions cleared with no intervention from OBIM to correct the issue. | | OBP;#OFO;#OFO/SIGMA | | | 1/21/18 11:35 | OBIM | 9743150 | Yes | N/A | 11:53 AM: e3 support has joined the bridge call. OBIM has engaged the CJIS Watch Commander, who advised that they are contacting their engineers to investigate their NGI System.
1:35 PM: Bridge call continues, CJIS has reported that they have an SA on site working to resolve the issue.
2:10 PM: CJIS has stated that they will be reaching out to SA for a status update.
2:31 PM: OBIM is reaching back out to the watch commander to get a status and to see if a representative from the SA can join the bridge.
3:47 PM: OBIM has been unable to get anyone from CJIS to join current bridge call for an update. In the last hour the backlog has dropped considerably from a high of 122 down to 43. OBIM continues to monitor situation.
3:44 PM: e3 has observed backlog numbers continue to drop. E3 currently has 15 Backlogged transactions.
3:50 PM: e3 support is down to 10 transactions backlogged.
3:55 PM: Leslie Donovan reached out to the CJIS Watch Commander to join the bridge call in session. Only 1 person is on duty and may or may not be able to join due to workload. CJIS did state that their transactions are all good.
4:02 PM: No action was taken by OBIM to correct the issue. It is assumed that CJIS SA's assisted with working off the bookings backlog from the exception queue. E3 is now processing in real time. Bridge call closed. | | | CJIS situational Awareness Impacting e3 Biometrics | CBP OFO | NGI/CJIS | Resolved:No action was taken by OBIM to correct the issue. It is assumed that CJIS SA's assisted with working off the bookings backlog from the exception queue. E3 is now processing in real time. Bridge call closed 16:02 | CJIS did not provide | CJIS | | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were not processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/21/18 07:00 | 1/21/18 16:05 | 9:05 | N/A | | 1/21/2018 CJIS Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. OBIM established a bridge call to contact CJIS representatives. The CJIS Watch Commander confirmed engineers were investigating the issue. At 12:50 PM OBIM provided CJIS with a list of NGI TCNs to locate the transactions being impacted. E3 confirmed transactions are now processing in real time, and at 4:02 PM the backlog of transactions cleared with no intervention from OBIM to correct the issue. | | OBP;#OFO;#OFO/SIGMA | | | 1/21/18 11:35 | OBIM | 9743150 | Yes | N/A | 11:53 AM: e3 support has joined the bridge call. OBIM has engaged the CJIS Watch Commander, who advised that they are contacting their engineers to investigate their NGI System.
1:35 PM: Bridge call continues, CJIS has reported that they have an SA on site working to resolve the issue.
2:10 PM: CJIS has stated that they will be reaching out to SA for a status update.
2:31 PM: OBIM is reaching back out to the watch commander to get a status and to see if a representative from the SA can join the bridge.
3:47 PM: OBIM has been unable to get anyone from CJIS to join current bridge call for an update. In the last hour the backlog has dropped considerably from a high of 122 down to 43. OBIM continues to monitor situation.
3:44 PM: e3 has observed backlog numbers continue to drop. E3 currently has 15 Backlogged transactions.
3:50 PM: e3 support is down to 10 transactions backlogged.
3:55 PM: Leslie Donovan reached out to the CJIS Watch Commander to join the bridge call in session. Only 1 person is on duty and may or may not be able to join due to workload. CJIS did state that their transactions are all good.
4:02 PM: No action was taken by OBIM to correct the issue. It is assumed that CJIS SA's assisted with working off the bookings backlog from the exception queue. E3 is now processing in real time. Bridge call closed. | | | CJIS situational Awareness Impacting e3 Biometrics | CBP OFO | NGI/CJIS | Resolved:No action was taken by OBIM to correct the issue. It is assumed that CJIS SA's assisted with working off the bookings backlog from the exception queue. E3 is now processing in real time. Bridge call closed 16:02 | CJIS did not provide | CJIS | | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were not processing | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/19/18 13:05 | 1/19/18 20:50 | 4:15 | N/A | |
CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/19/18 14:25 | OBIM | 9739881 | Yes | N/A | 1:05 PM - Events Start
2:22 PM - e3 Support received notification from OBIM
2:43 PM - e3 Support members joined the bridge call with OBIM
2:51 PM - Brandon Long asked if CJIS knows the issue; the Duty Officer confirmed CJIS has acknowledged a problem
3:15 PM – Stated that the numbers have risen within the last 30 minutes
3:23 PM – Mike from the service desk joined the bridge call
3:30 PM – e3 Support shift change.
3:55 PM – Jean Paul (second shift –Watch Officer) joins bridge call
3:58 PM – Vic spoke with Sherry at CJIS and CJIS states that they are resolved but are working off a huge backlog.
4:00 PM – John Bassett Joins taking over for Lacey Smith for the OCM.
4:15 PM – Backlog Report: 58 Booking of which 46 are over SLA. Oldest transaction is from 12:49pm.
4:22 PM - No movement up or down in transactions for 25 minutes.
4:33 PM - Maurice joins call
4:41 PM – Looks like the backlog is being processed in batches because there’s been no movement. OBIM putting in call to CJIS
4:53 PM – John Bassett spoke with CJIS and they admit to still having issues & are working off the backlog
5:04pm – OBIM needed authorization to restart a new router
5:10 PM – Current backlog still holding at 61 Booking with 49 Over SLA.
5:31 PM – OBIM is reaching back out to CJIS because backlog is holding, not increasing or decreasing.
5:35 PM – CJIS had to restart a process again.
6:07 PM – CJIS has 3 people working on the backlog situation.
7:00 PM: Bridge call continues, with no significant change in the backlog. OBIM is currently in the process of compiling a list to provide CJIS for manual processing.
8:00 PM: Bridge call continues, with no significant change in the backlog. CJIS continues troubleshooting efforts to remediate the situation. A list of the backlogged transactions has been provided to CJIS for manual processing.
8:55 PM: Resolved: The backlog has returned to normal after CJIS has fixed the issue on their end. We are now processing Booking transactions in real time with a total of 8 transactions in the backlog currently. None of these transactions are over SLA. | | | CJIS SItuational Awareness Impacting e3 Biometrics | CBP OFO | NGI/CJIS | Resolved: The backlog has returned to normal after CJIS has fixed the issue on their end.
CJIS did not provide a resolution or root cause | TBD | CJIS | OBIM, CBP Duty Officers, CJIS Watch Commander | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available during this time, transaction responses were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/19/18 13:05 | 1/19/18 20:50 | 4:15 | N/A | |
CJIS Incident Description and Impact Statement: E3 Support received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) transactions. | | OBP;#OFO;#OFO/SIGMA | | | 1/19/18 14:25 | OBIM | 9739881 | Yes | N/A | 1:05 PM - Events Start
2:22 PM - e3 Support received notification from OBIM
2:43 PM - e3 Support members joined the bridge call with OBIM
2:51 PM - Brandon Long asked if CJIS knows the issue; the Duty Officer confirmed CJIS has acknowledged a problem
3:15 PM – Stated that the numbers have risen within the last 30 minutes
3:23 PM – Mike from the service desk joined the bridge call
3:30 PM – e3 Support shift change.
3:55 PM – Jean Paul (second shift –Watch Officer) joins bridge call
3:58 PM – Vic spoke with Sherry at CJIS and CJIS states that they are resolved but are working off a huge backlog.
4:00 PM – John Bassett Joins taking over for Lacey Smith for the OCM.
4:15 PM – Backlog Report: 58 Booking of which 46 are over SLA. Oldest transaction is from 12:49pm.
4:22 PM - No movement up or down in transactions for 25 minutes.
4:33 PM - Maurice joins call
4:41 PM – Looks like the backlog is being processed in batches because there’s been no movement. OBIM putting in call to CJIS
4:53 PM – John Bassett spoke with CJIS and they admit to still having issues & are working off the backlog
5:04pm – OBIM needed authorization to restart a new router
5:10 PM – Current backlog still holding at 61 Booking with 49 Over SLA.
5:31 PM – OBIM is reaching back out to CJIS because backlog is holding, not increasing or decreasing.
5:35 PM – CJIS had to restart a process again.
6:07 PM – CJIS has 3 people working on the backlog situation.
7:00 PM: Bridge call continues, with no significant change in the backlog. OBIM is currently in the process of compiling a list to provide CJIS for manual processing.
8:00 PM: Bridge call continues, with no significant change in the backlog. CJIS continues troubleshooting efforts to remediate the situation. A list of the backlogged transactions has been provided to CJIS for manual processing.
8:55 PM: Resolved: The backlog has returned to normal after CJIS has fixed the issue on their end. We are now processing Booking transactions in real time with a total of 8 transactions in the backlog currently. None of these transactions are over SLA. | | | CJIS SItuational Awareness Impacting e3 Biometrics | CBP OFO | NGI/CJIS | Resolved: The backlog has returned to normal after CJIS has fixed the issue on their end.
CJIS did not provide a resolution or root cause | TBD | CJIS | OBIM, CBP Duty Officers, CJIS Watch Commander | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available during this time, transaction responses were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/17/18 20:55 | 1/18/18 00:35 | 3:40 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed no responses from the Department of Justice (DOJ) Division, for Joint Automated Booking System(JABS) for booking (BKG) and Ten Print Response (TPRS) transactions. OBIM is reaching out to the engineers that were contacted for the earlier occurrence of network connectivity issues on Department of Justice (DOJ) & OneNet VPN connection . | | OBP;#OFO;#OFO/SIGMA | | | 1/17/18 22:45 | OBIM | 9728985 | Yes | N/A |
Update 1: Due to VPN tunnel issues, DHS OneNet is conducting an Emergency Break Fix (EBF) at DC1 to address the sporadic processing of JABS transactions. Engineers are reaching out to CISCO to assist in troubleshooting efforts.
Resolved: After the EBF (Emergency Break Fix) by DHS OneNet was completed successfully to fix the issue with the crypto maps, the backlog has now been drained to 9 transactions. All Booking transactions are now processing in real time and the VPN Tunnel has been restored. OBIM and e3 support are able to confirm that all transactions are now back within SLA. | | | DHS One Net impacting responses from (JABS) Second Occurrence | CBP OFO | | Resolved: After the EBF (Emergency Break Fix) by DHS OneNet was completed successfully to fix the issue with the crypto maps, the backlog has now been drained to 9 transactions. All Booking transactions are now processing in real time and the VPN Tunnel has been restored. OBIM and e3 support are able to confirm that all transactions are now back within SLA. | TBD | TBD | DHS One Net, JABS Network Engineers, EMSG, OBIM PAS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | TBD | N/A | e3 biometrics was available, transaction responses were not returning | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/17/18 20:55 | 1/18/18 00:35 | 3:40 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed no responses from the Department of Justice (DOJ) Joint Automated Booking System (JABS) for Booking (BKG) and Ten Print Response (TPRS) transactions. OBIM is reaching out to the engineers who were contacted for the earlier occurrence of network connectivity issues on the Department of Justice (DOJ) & OneNet VPN connection. | | OBP;#OFO;#OFO/SIGMA | | | 1/17/18 22:45 | OBIM | 9728985 | Yes | N/A |
Update 1: Due to VPN tunnel issues, DHS OneNet is conducting an Emergency Break Fix (EBF) at DC1 to address the sporadic processing of JABS transactions. Engineers are reaching out to CISCO to assist in troubleshooting efforts.
Resolved: After the EBF (Emergency Break Fix) by DHS OneNet was completed successfully to fix the issue with the crypto maps, the backlog has now been drained to 9 transactions. All Booking transactions are now processing in real time and the VPN Tunnel has been restored. OBIM and e3 support are able to confirm that all transactions are now back within SLA. | | | DHS One Net impacting responses from (JABS) Second Occurrence | CBP OFO | | Resolved: After the EBF (Emergency Break Fix) by DHS OneNet was completed successfully to fix the issue with the crypto maps, the backlog has now been drained to 9 transactions. All Booking transactions are now processing in real time and the VPN Tunnel has been restored. OBIM and e3 support are able to confirm that all transactions are now back within SLA. | TBD | TBD | DHS One Net, JABS Network Engineers, EMSG, OBIM PAS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | TBD | N/A | e3 biometrics was available, transaction responses were not returning | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/17/18 13:20 | 1/17/18 20:45 | 7:25 | N/A | |
Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed no responses from the Department of Justice (DOJ) Joint Automated Booking System (JABS) for Booking (BKG) and Ten Print Response (TPRS) transactions. OBIM has reached out to JABS, and they acknowledged an issue; OBIM has also reached out to OneNet to provide assistance | | OBP;#OFO;#OFO/SIGMA | | | 1/17/18 14:05 | OBIM | 9728985 | Yes | N/A |
1:10 PM Original Outage started 1/17/18
1:46 PM OBIM Helpdesk Notified
2:05 PM E3 Support Notified by OBIM email
2:30 PM Bridge Call spun up by OBIM, currently waiting on EMSG to join/assist our bridge. Current counts TPRS 122 and Bookings 117.
3:28 PM e3 Support joins Bridge Call
3:43 PM OneNet States they are having trouble getting someone to join the bridge call.
3:48 PM OneNet sent out ticket # & is actively trying to get someone to join bridge call.
3:50 PM e3 noted 2 emails, 3 Remedy tickets, 1 phone call so far.
3:51 PM OneNet VPN issue between OBIM & Jabs clarified
3:58 PM OneNet VPN IP address acquired & researched.
4:01 PM John Bassett from OBIM Joins.
4:03 PM Possible NAT issue on the OBIM side.
4:04 PM OBIM stated an email said issue could be
4:04 PM JABS network engineer with VPN expertise needed.
4:11 PM Wesley from OneNet is only seeing traffic from DC2 not DC1 tunnel.
4:33 PM OneNet is seeing traffic now through both DC1 & DC2 but cannot verify it's hitting the JABS firewall
4:48 PM OBIM got DOJ on phone to assist in remediating issue
4:54 PM George from DOJ joined call.
5:11 PM Hamad Dasty DOJ Network Security team joins call, needing source & destination information to check for changes & devices.
5:29 PM DOJ verified that they are not seeing any traffic from OBIM to DOJ
5:39 PM Suggestion was made to bounce VPN tunnel, not appliance.
5:40 PM Bouncing VPN tunnel has caused traffic to flow once again.
5:45 PM Traffic is flowing both ways now; testing VPN for stability.
5:51 PM JABS is starting to see traffic come in from 1:15pm to 5:15pm
5:52 PM DHS OneNet NOC bounced their tunnel to DOJ and the DOJ bounced their tunnel to reset the connection. At this time the tunnel is up and passing traffic both directions.
6:21 PM OBIM confirmed TPRS and BKG are starting to decrease
6:24 PM Successful bounce of VPN services on DoJ & OneNet
6:30 PM Engineers have observed transactions processing at this time and continue to monitor the system for stability.
6:45 PM Engineers observed that network traffic on the tunnel had stopped once again and transactions are starting to increase; the Duty Officer is reaching out to JABS to have JABS network engineers rejoin the bridge call
7:19 PM DHS OneNet Tier 3 support has joined the bridge call and is troubleshooting the issue. Bridge call is on-going.
7:21 PM Network VPN investigations continue
7:32 PM Amad from DHS OneNet has joined the bridge call
8:00 PM Network VPN investigations continue.
8:10 PM Engineers confirmed traffic is starting to flow again
8:24 PM OBIM confirmed TPRS and BKG are starting to drop
8:26 PM Rachelle requested that e3 Support reach out to RGV and McAllen station
8:36 PM Brandon Long from e3 Support confirmed there are 6 BKG transactions and 2 of them are over the SLA
8:44 PM Rachelle approves e3 support to drop the call
8:49 PM Brandon Long confirmed with RGV that transactions are starting to pick up
8:50 PM e3 Support dropped the bridge call | | | DHS One Net impacting responses from (JABS) | CBP OFO | DHS OneNet | Resolved: As of 20:45 DoJ JABS & OneNet engineers bounced their tunnels after observing no traffic on the tunnel between the systems; following the refresh of the tunnels, traffic is now passing. Modifications were made on the DOJ side to their VPN connection and engineers observed stability. E3 support confirmed the backlog fully depleted and the Office of Biometric Identity Management confirmed transactions were processing in real time. | TBD, possibly due to VPN tunnel issues | | DHS OneNet, JABS Network Engineers, OBIM PAS Team, ESMG, CBP Duty Officers, E3 Support | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | DoJ | N/A | e3 biometrics was available, transaction responses were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/17/18 13:20 | 1/17/18 20:45 | 7:25 | N/A | |
Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed no responses from the Department of Justice (DOJ) Joint Automated Booking System (JABS) for Booking (BKG) and Ten Print Response (TPRS) transactions. OBIM has reached out to JABS, and they acknowledged an issue; OBIM has also reached out to OneNet to provide assistance | | OBP;#OFO;#OFO/SIGMA | | | 1/17/18 14:05 | OBIM | 9728985 | Yes | N/A |
1:10 PM Original Outage started 1/17/18
1:46 PM OBIM Helpdesk Notified
2:05 PM E3 Support Notified by OBIM email
2:30 PM Bridge Call spun up by OBIM, currently waiting on EMSG to join/assist our bridge. Current counts TPRS 122 and Bookings 117.
3:28 PM e3 Support joins Bridge Call
3:43 PM OneNet States they are having trouble getting someone to join the bridge call.
3:48 PM OneNet sent out ticket # & is actively trying to get someone to join bridge call.
3:50 PM e3 noted 2 emails, 3 Remedy tickets, 1 phone call so far.
3:51 PM OneNet VPN issue between OBIM & Jabs clarified
3:58 PM OneNet VPN IP address acquired & researched.
4:01 PM John Bassett from OBIM Joins.
4:03 PM Possible NAT issue on the OBIM side.
4:04 PM OBIM stated an email said issue could be
4:04 PM JABS network engineer with VPN expertise needed.
4:11 PM Wesley from OneNet is only seeing traffic from DC2 not DC1 tunnel.
4:33 PM OneNet is seeing traffic now through both DC1 & DC2 but cannot verify it's hitting the JABS firewall
4:48 PM OBIM got DOJ on phone to assist in remediating issue
4:54 PM George from DOJ joined call.
5:11 PM Hamad Dasty DOJ Network Security team joins call, needing source & destination information to check for changes & devices.
5:29 PM DOJ verified that they are not seeing any traffic from OBIM to DOJ
5:39 PM Suggestion was made to bounce VPN tunnel, not appliance.
5:40 PM Bouncing VPN tunnel has caused traffic to flow once again.
5:45 PM Traffic is flowing both ways now; testing VPN for stability.
5:51 PM JABS is starting to see traffic come in from 1:15pm to 5:15pm
5:52 PM DHS OneNet NOC bounced their tunnel to DOJ and the DOJ bounced their tunnel to reset the connection. At this time the tunnel is up and passing traffic both directions.
6:21 PM OBIM confirmed TPRS and BKG are starting to decrease
6:24 PM Successful bounce of VPN services on DoJ & OneNet
6:30 PM Engineers have observed transactions processing at this time and continue to monitor the system for stability.
6:45 PM Engineers observed that network traffic on the tunnel had stopped once again and transactions are starting to increase; the Duty Officer is reaching out to JABS to have JABS network engineers rejoin the bridge call
7:19 PM DHS OneNet Tier 3 support has joined the bridge call and is troubleshooting the issue. Bridge call is on-going.
7:21 PM Network VPN investigations continue
7:32 PM Amad from DHS OneNet has joined the bridge call
8:00 PM Network VPN investigations continue.
8:10 PM Engineers confirmed traffic is starting to flow again
8:24 PM OBIM confirmed TPRS and BKG are starting to drop
8:26 PM Rachelle requested that e3 Support reach out to RGV and McAllen station
8:36 PM Brandon Long from e3 Support confirmed there are 6 BKG transactions and 2 of them are over the SLA
8:44 PM Rachelle approves e3 support to drop the call
8:49 PM Brandon Long confirmed with RGV that transactions are starting to pick up
8:50 PM e3 Support dropped the bridge call | | | DHS One Net impacting responses from (JABS) | CBP OFO | DHS OneNet | Resolved: As of 20:45 DoJ JABS & OneNet engineers bounced their tunnels after observing no traffic on the tunnel between the systems; following the refresh of the tunnels, traffic is now passing. Modifications were made on the DOJ side to their VPN connection and engineers observed stability. E3 support confirmed the backlog fully depleted and the Office of Biometric Identity Management confirmed transactions were processing in real time. | TBD, possibly due to VPN tunnel issues | | DHS OneNet, JABS Network Engineers, OBIM PAS Team, ESMG, CBP Duty Officers, E3 Support | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | DoJ | N/A | e3 biometrics was available, transaction responses were delayed | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/15/18 11:00 | 1/15/18 17:45 | 6:45 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions | | OBP;#OFO;#OFO/SIGMA | | | 1/15/18 13:00 | OBIM Help Desk | 9714852 | Yes | N/A | Event start time 11:00AM
1:00 PM – e3 Support received notification from OBIM
1:15 PM – e3 Support members joined the bridge call with OBIM
1:34 PM – Mike Decker from JABS joined the bridge call to investigate the transactions. Mike was given some of the transactions, and when he checked those he stated that JABS has not received anything from NGI. There seems to be a timeout issue between JABS and CJIS. OBIM is reaching out to CJIS now.
2:05 PM – Joshua Duty Officer joined the bridge call.
2:45 PM – e3 Support reached out to Rachelle Henderson and provided brief update of the situation.
3:31 PM – Brandon Long joined the bridge call
3:40 PM – Jake Bumbrey joined the bridge call
3:45 PM – e3 Support reached out to the CJIS Watch Commander; CJIS acknowledged the issue and reported that engineers are working on it.
4:55 PM – We’re down to 5 transactions over SLA from today & 3 from yesterday. OBIM states there is no way they are going to mitigate any of the transactions beyond yesterday’s date. That issue is still being worked on by CJIS.
5:11 PM – there are 7 transactions over SLA left from today & yesterday. OBIM may close down bridge call as most transactions are processing in real time.
5:32 PM – The 5 over-SLA bookings from today have been sent to Denise at CJIS for further remediation, but she likely won't see them until tomorrow morning.
5:41 PM – Going to shut down bridge call because OBIM states everything else is processing in real time.
5:43 PM - Shut down of Bridge Call; e3 going to monitor for another 2 hours. | | | (CJIS) Situational awareness affecting e3 biometrics | CBP OFO | NGI/CJIS | Following the issues surrounding the CJIS maintenance on 1/14/2018, OBIM sent out a notification for delays in processing booking transactions. OBIM noticed the delays at 11:00 am and notified e3 support at 1:00 pm this afternoon. A bridge call was established with CBP Duty officers, e3 support and the OBIM PAS team. OBIM engaged CJIS to inform them of the issue as well as provide the backlog of BKG transactions along with JTIDS. CJIS advised they did not have the resources available but would escalate the issue for further investigation. After multiple attempts to contact CJIS, the backlog of transactions started to decline. | TBD | CJIS | OBIM PAS Team, CBP Duty Officers | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | Biometrics was available; users could not receive IAFIS responses. | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/15/18 11:00 | 1/15/18 17:45 | 6:45 | N/A | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions | | OBP;#OFO;#OFO/SIGMA | | | 1/15/18 13:00 | OBIM Help Desk | 9714852 | Yes | N/A | Event start time 11:00AM
1:00 PM – e3 Support received notification from OBIM
1:15 PM – e3 Support members joined the bridge call with OBIM
1:34 PM – Mike Decker from JABS joined the bridge call to investigate the transactions. Mike was given some of the transactions, and when he checked those he stated that JABS has not received anything from NGI. There seems to be a timeout issue between JABS and CJIS. OBIM is reaching out to CJIS now.
2:05 PM – Joshua Duty Officer joined the bridge call.
2:45 PM – e3 Support reached out to Rachelle Henderson and provided brief update of the situation.
3:31 PM – Brandon Long joined the bridge call
3:40 PM – Jake Bumbrey joined the bridge call
3:45 PM – e3 Support reached out to the CJIS Watch Commander; CJIS acknowledged the issue and reported that engineers are working on it.
4:55 PM – We’re down to 5 transactions over SLA from today & 3 from yesterday. OBIM states there is no way they are going to mitigate any of the transactions beyond yesterday’s date. That issue is still being worked on by CJIS.
5:11 PM – there are 7 transactions over SLA left from today & yesterday. OBIM may close down bridge call as most transactions are processing in real time.
5:32 PM – The 5 over-SLA bookings from today have been sent to Denise at CJIS for further remediation, but she likely won't see them until tomorrow morning.
5:41 PM – Going to shut down bridge call because OBIM states everything else is processing in real time.
5:43 PM - Shut down of Bridge Call; e3 going to monitor for another 2 hours. | | | (CJIS) Situational awareness affecting e3 biometrics | CBP OFO | NGI/CJIS | Following the issues surrounding the CJIS maintenance on 1/14/2018, OBIM sent out a notification for delays in processing booking transactions. OBIM noticed the delays at 11:00 am and notified e3 support at 1:00 pm this afternoon. A bridge call was established with CBP Duty officers, e3 support and the OBIM PAS team. OBIM engaged CJIS to inform them of the issue as well as provide the backlog of BKG transactions along with JTIDS. CJIS advised they did not have the resources available but would escalate the issue for further investigation. After multiple attempts to contact CJIS, the backlog of transactions started to decline. | TBD | CJIS | OBIM PAS Team, CBP Duty Officers | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | Biometrics was available; users could not receive IAFIS responses. | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Biometrics | 1/11/18 14:00 | 1/11/18 16:10 | 2:10 | OBIM 507736 | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 14:25 | OBIM | 9704441 | Yes | | 2:00pm OBIM notices that CJIS is returning slow responses
2:26pm OBIM Notifies e3 Support of the slowness,
2:36pm e3 Starts Situational Awareness
2:43pm e3 Support joins Bridge call, Nielab monitors & Terry keeps timeline
2:50pm OBIM attempts to reach out to Denice at CJIS
2:59pm OBIM reaches Denice at CJIS & they are investigating issue
3:01pm CJIS states that they see nothing on their side but a few stuck transactions
3:02pm e3 Sends OBIM the 5 transactions that are over SLA (from 8:30am to 12:24pm)
3:28pm OBIM is now seeing the backlog trending down
3:48pm OBIM is looking to Denice to find out why only one of the original 5 TIDS submitted came back
4:00pm Backlog is continuing to slowly drain (18 bookings at the moment)
4:10pm Backlog down to 13 with only 1 of the original TIDS not returned yet. OBIM is going into a monitoring state & closing the bridge | | | (CJIS) Situational awareness affecting e3 biometrics | CBP, OFO | NGI/CJIS | CJIS resolved their issue and normal transaction flow has been restored. | TBD | CJIS | OBIM, CJIS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were not being returned | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/11/18 14:00 | 1/11/18 16:10 | 2:10 | OBIM 507736 | | Incident Description and Impact Statement: E3 Support just received notification from the Office of Biometric Identity Management (OBIM) that their engineers have observed delayed responses from the FBI's Criminal Justice Information Services (CJIS) Division, for Next Generation Identification (NGI) for booking (BKG) and Ten Print Response (TPRS) transactions. E3 support is currently investigating the impact to users and will provide updates as they come. A SIT Rep will be sent out very shortly. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 14:25 | OBIM | 9704441 | Yes | | 2:00pm OBIM notices that CJIS is returning slow responses
2:26pm OBIM Notifies e3 Support of the slowness,
2:36pm e3 Starts Situational Awareness
2:43pm e3 Support joins Bridge call, Nielab monitors & Terry keeps timeline
2:50pm OBIM attempts to reach out to Denice at CJIS
2:59pm OBIM reaches Denice at CJIS & they are investigating issue
3:01pm CJIS states that they see nothing on their side but a few stuck transactions
3:02pm e3 Sends OBIM the 5 transactions that are over SLA (from 8:30am to 12:24pm)
3:28pm OBIM is now seeing the backlog trending down
3:48pm OBIM is looking to Denice to find out why only one of the original 5 TIDS submitted came back
4:00pm Backlog is continuing to slowly drain (18 bookings at the moment)
4:10pm Backlog down to 13 with only 1 of the original TIDS not returned yet. OBIM is going into a monitoring state & closing the bridge | | | (CJIS) Situational awareness affecting e3 biometrics | CBP, OFO | NGI/CJIS | CJIS resolved their issue and normal transaction flow has been restored. | TBD | CJIS | OBIM, CJIS | Impact: e3 Biometrics will be available during this outage, and users will be able to submit transactions, however all transactions will be queued and processed in the order received when service is restored. All subjects will remain in holding cells while users are pending response from IAFIS. Users are unable to access the criminal histories of subjects. *Officer Safety* without criminal histories, users do not know who they have in custody and whether or not they have a history of violence. | N/A | CJIS | N/A | e3 Biometrics was available, IAFIS transactions were not being returned | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Assault | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
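Note (illustration only, not part of the incident record): the timeline above mentions an e3 Support verification step after EDME recycled the applications. The Python sketch below shows one generic way a post-maintenance availability check could be scripted. The application names and URLs are placeholders invented for the example; they are not the real e3 endpoints or the team's actual verification procedure.

import urllib.error
import urllib.request

# Placeholder endpoints; the real e3 application URLs are not recorded in this log.
APPS = {
    "e3 Processing": "https://example.invalid/e3/processing",
    "e3 Biometrics": "https://example.invalid/e3/biometrics",
    "e3 Detentions": "https://example.invalid/e3/detentions",
}

def is_accessible(url, timeout=10):
    """Return True if the endpoint answers with a successful HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

for name, url in APPS.items():
    print(f"{name}: {'accessible' if is_accessible(url) else 'NOT accessible'}")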
| Unplanned Outage | e3 Biometrics | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 FPQ | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Detentions | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 OASISS | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Processing | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Unplanned Outage | e3 Prosecutions | 1/11/18 05:10 | 1/11/18 05:55 | 0:45 | N/A | | Incident Description and Impact Statement: Following the scheduled ICE EID maintenance EDME Web Services removed the site down page for all e3 applications. Following the removal of the site down page CBP and OFO users were unsuccessful in launching all e3 applications. E3 support received notification from the Technology Operations Center, that alerts were being generated in the AO portal for e3 applications. A bridge call was established with CBP Duty officers, e3 support, EDME, and the Technology Operation Center to investigate further. During the bridge call e3 applications were recycled, EDME cleared the e3 Detentions apache cache servers, and all e3 applications successfully launched. | | OBP;#OFO;#OFO/SIGMA | | | 1/11/18 05:10 | Technology Operations Center | 9698530 | Yes | N/A | Thu 1/11/2018 5:12 AM – TOC sent alerts for e3 applications being down in the AO Portal
Thu 1/11/2018 5:18 AM – EDME removed site down page for all e3 applications
Thu 1/11/2018 5:33 AM – E3 support verification process unsuccessful
Thu 1/11/2018 5:38 AM – Bridge Call Established / EDME recycled e3 applications
Thu 1/11/2018 5:54 AM – E3 applications verified and were accessible
| | | E3 Service Disruption Following ICE EID Maintenance | CBP OFO | | Bouncing of the servers | Following the EID maintenance, the e3 servers were not bounced, which is why the applications were unavailable. | EDME | CBP Duty Officers, TOC, OBIM, EDME | CBP and OFO users were unable to access all e3 applications following the scheduled ICE EID maintenance | N/A | EDME | N/A | All e3 applications were unavailable | N/A | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Assault | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Biometrics | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 FPQ | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Detentions | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 OASISS | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Processing | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |
| Planned Outage | e3 Prosecutions | 1/11/18 02:00 | 1/11/18 05:00 | 3:00 | N/A | N/A | The scheduled outage outlined below is now complete.
Notification Recipients
Office of Biometric Identity Management (OBIM), Biometric Support Center (BSC), Customs and Border Protection (CBP), Criminal Justice Information System (CJIS), United States Coast Guard (USCG), Department of State (DoS), Arrival and Departure Information System (ADIS), Foreign Terrorist Tracking Task Force (FTTTF), Immigration and Customs Enforcement (ICE), United Kingdom-Visa (UK-Visa), United States Citizenship and Immigration Services (USCIS), Department of Homeland Security (DHS) Headquarters.
Scheduled Maintenance Notification Information
Summary: Immigration and Customs Enforcement (ICE) will be performing a scheduled database outage of the Enforcement Integrated Database (EID) on Thursday, January 11th, 2018 between 2:00AM and 5:00AM (3 hours).
Systems/Users Impacted: All applications that use EID (E3 Processing/E3 Biometrics, EAGLE, CAIS, LYNX) will be unavailable during that time.
Incident Ticket(s): OBIM 507616 | Enforcement Integrated Database (ICE/EID) | OBP;#OFO | | | | | 9698530 | Yes | | | 1/11/18 05:00 | 1/11/18 02:00 | Out Of Cycle EID Maintenance | | ICE/EID | | | | | | | | | | | Item | sites/OIT/bems/tio/bemsus/Lists/OT |